Probabilistic Methods.   Explanation: The paper discusses several computer algorithms for discovering patterns in groups of protein sequences that are based on fitting the parameters of a statistical model to a group of related sequences. These algorithms include hidden Markov model (HMM) algorithms for multiple sequence alignment, and the MEME and Gibbs sampler algorithms for discovering motifs. The paper presents a solution to the problem of convex combinations in the form of a heuristic based on using extremely low variance Dirichlet mixture priors as part of the statistical model. The paper analyzes the problem mathematically and shows how the proposed heuristic can effectively eliminate the problem of convex combinations in protein sequence pattern discovery. All of these aspects are related to probabilistic methods in AI.
Rule Learning.   Explanation: The paper describes the application of three machine learning algorithms (1R, FOIL, and InductH) to identify risk factors that govern the colposuspension cure rate. The goal is to induce a set of rules that describe which risk factors result in differences of cure rate. Therefore, the paper belongs to the sub-category of AI known as Rule Learning.
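The 1R learner mentioned above is simple enough to sketch: for each attribute, build a one-level rule mapping each attribute value to its majority class, then keep the attribute whose rule makes the fewest training errors. The following is a minimal illustration of that idea; the data and attribute layout are invented, not taken from the colposuspension study.

```python
from collections import Counter, defaultdict

def one_r(X, y):
    """1R (Holte-style): pick the single attribute whose one-level rule
    (attribute value -> majority class) has the lowest training error."""
    best = None
    for attr in range(len(X[0])):
        # count class labels per value of this attribute
        buckets = defaultdict(Counter)
        for row, label in zip(X, y):
            buckets[row[attr]][label] += 1
        rule = {v: c.most_common(1)[0][0] for v, c in buckets.items()}
        errors = sum(1 for row, label in zip(X, y) if rule[row[attr]] != label)
        if best is None or errors < best[2]:
            best = (attr, rule, errors)
    return best  # (attribute index, value->class rule, training errors)

# toy data: attribute 0 perfectly predicts the class, attribute 1 is noise
X = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]
y = ["pos", "pos", "neg", "neg"]
attr, rule, errors = one_r(X, y)
```

Here 1R selects attribute 0 with the rule {a: pos, b: neg} and zero training errors, which is exactly the kind of single-factor rule the paper induces for cure-rate risk factors.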
Reinforcement Learning. This paper belongs to the sub-category of Reinforcement Learning as it uses an RL method to find dynamic channel allocation policies that are better than previous heuristic solutions. The authors formulate the task as a dynamic programming problem and use RL to solve it. They present results on a large cellular system and show that the learned policies perform well for a broad variety of call traffic patterns.
Probabilistic Methods.   Explanation: The paper discusses the use of Markov decision processes (MDPs) and partially observable MDPs (POMDPs) to model decision-making in uncertain environments. These are probabilistic methods that involve estimating probabilities of different outcomes and using them to make decisions. The paper also discusses the use of algorithms for solving POMDPs, which involve probabilistic reasoning.
Probabilistic Methods.   Explanation: The paper proposes a variational methodology for probabilistic inference in graphical models. The focus is on enhancing the representational power of probability models through qualitative characterization of their properties, and on developing variational techniques that allow the computation of upper and lower bounds on the quantities of interest. The paper does not discuss Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Probabilistic Methods.   Explanation: The paper discusses real-time decision algorithms for evaluating influence diagrams, which are a type of probabilistic graphical model. The algorithms tested in the experiments are variants of probabilistic inference algorithms, such as Incremental Probabilistic Inference and the algorithm suggested by Goldszmidt. The paper also discusses the performance of these algorithms in a test domain, which is a common approach in probabilistic methods research.
Rule Learning, Theory.   Rule Learning is present in the paper as the authors develop a formal framework for learning efficient problem solving from random problems and their solutions, and apply this framework to two different representations of learned knowledge, namely control rules and macro-operators.   Theory is also present in the paper as the authors prove theorems that identify sufficient conditions for learning in each representation, and their proofs are constructive in that they are accompanied with learning algorithms. The paper also integrates many strands of experimental and theoretical work in machine learning.
Theory.   Explanation: The paper presents a theoretical approach to optimizing sequence alignments using finite automata-derived cost functions and extending Hirschberg's linear space algorithm. There is no mention of any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Probabilistic Methods.   Explanation: The paper describes the development of motif-based Hidden Markov Models (HMMs) to represent protein families. HMMs are a probabilistic method commonly used in bioinformatics to model sequence data. The authors use these models to identify conserved motifs within protein families and to predict the function of uncharacterized proteins based on their sequence similarity to known members of the family. The paper does not mention any other sub-categories of AI.
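The core computation behind HMM-based family scoring is the forward algorithm, which gives the probability of a sequence under the model. Below is a minimal sketch with an invented two-state HMM (a "match"-like state and a background state over a two-letter alphabet); a real profile HMM has per-position match, insert, and delete states.

```python
import math

def forward_loglik(seq, states, init, trans, emit):
    """Forward algorithm: log-probability of an observation sequence
    under an HMM, summing over all hidden state paths."""
    alpha = {s: init[s] * emit[s][seq[0]] for s in states}
    for obs in seq[1:]:
        alpha = {s: sum(alpha[r] * trans[r][s] for r in states) * emit[s][obs]
                 for s in states}
    return math.log(sum(alpha.values()))

# toy two-state HMM; all parameters are illustrative, not a real profile HMM
states = ("M", "B")
init = {"M": 0.5, "B": 0.5}
trans = {"M": {"M": 0.9, "B": 0.1}, "B": {"M": 0.2, "B": 0.8}}
emit = {"M": {"A": 0.8, "C": 0.2}, "B": {"A": 0.3, "C": 0.7}}
loglik = forward_loglik("AC", states, init, trans, emit)
```

A useful sanity check is that the likelihoods of all sequences of a fixed length sum to one, since the model defines a proper distribution over sequences.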
Theory.   Explanation: The paper presents a theoretical analysis of a new variant of the mistake-bound model of learning, comparing the performance of online and offline learners and characterizing the number of mistakes in the offline model. The paper does not involve the implementation or application of any specific AI sub-category such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Probabilistic Methods.   Explanation: The paper introduces a method for fitting smoothing spline ANOVA models to data from exponential families, and discusses how to calculate Bayesian confidence intervals for the estimates. This involves the use of probabilistic methods, which are concerned with modeling uncertainty and making predictions based on probability distributions.
Genetic Algorithms, Rule Learning.   Genetic Algorithms: The paper presents an evolutionary approach to finding learning rules, which is similar to Genetic Programming (GP). The potential solutions are represented as variable length mathematical LISP S-expressions, and the approach employs a fixed set of non-problem-specific functions to solve a variety of problems.   Rule Learning: The paper discusses the usefulness of the encoding schema in discovering learning rules for supervised learning problems, with an emphasis on hard learning problems. The potential solutions are represented as mathematical expressions, which can be seen as rules for solving the problems. The paper also discusses future research directions within the context of GP practices, which includes rule learning.
Probabilistic Methods.   Explanation: The paper describes a method for estimating marginal likelihoods, which is a key quantity needed for Bayesian hypothesis testing and model selection. The method involves using posterior simulation output and is based on the Laplace-Metropolis estimator. The paper also discusses the application of the method to models with random effects, which involves a compound Laplace-Metropolis estimator. These methods are all probabilistic in nature, as they involve estimating probabilities and likelihoods based on statistical models and simulations.
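The Laplace-Metropolis idea can be sketched in a few lines: approximate the log marginal likelihood from posterior draws using the highest-density draw and the sample covariance of the draws. The toy model below (one normal observation, conjugate normal prior) is an assumption for illustration, chosen because the exact answer is known; it is not the paper's compound estimator for random-effects models.

```python
import numpy as np

def laplace_metropolis(samples, log_post):
    """Laplace-Metropolis estimate of the log marginal likelihood from
    posterior simulation output.
    samples:  (n, d) array of posterior draws.
    log_post: log(likelihood * prior) at each draw, shape (n,)."""
    n, d = samples.shape
    theta_star = np.argmax(log_post)                    # highest-density draw
    cov = np.cov(samples, rowvar=False).reshape(d, d)   # posterior covariance
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * d * np.log(2 * np.pi) + 0.5 * logdet + log_post[theta_star]

# toy model: y = 0 observed, y|theta ~ N(theta, 1), theta ~ N(0, 1)
# posterior is N(0, 1/2); exact log marginal likelihood is log N(0; 0, 2)
rng = np.random.default_rng(0)
draws = rng.normal(0.0, np.sqrt(0.5), size=(20000, 1))
theta = draws[:, 0]
log_post = -np.log(2 * np.pi) - theta**2   # log N(0; theta, 1) + log N(theta; 0, 1)
est = laplace_metropolis(draws, log_post)
true = -0.5 * np.log(4 * np.pi)
```

In this conjugate case the Laplace approximation is exact, so the Monte Carlo estimate lands very close to the analytic value.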
Probabilistic Methods, Reinforcement Learning, Theory.   Probabilistic Methods: The paper discusses modeling the trend of performance response during the course of learning, which involves probabilistic modeling.   Reinforcement Learning: The paper discusses a control system that can constrain the amount of learned knowledge to achieve peak performance, which is a key concept in reinforcement learning.   Theory: The paper discusses the general utility problem, which is a theoretical problem in machine learning. The paper also proposes a model that unifies different learning paradigms into one framework, which is a theoretical contribution.
Probabilistic Methods.   Explanation: The paper discusses the use of Hidden Markov Models (HMMs), which are a type of probabilistic model, in various applications related to protein modeling in computational biology. The paper focuses on the statistical modeling, database searching, and multiple sequence alignment of protein families and domains using HMMs. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Neural Networks.   Explanation: The paper explores the effect of initial weight selection on feed-forward networks learning simple functions with the back-propagation technique. Back-propagation is a commonly used algorithm for training neural networks. The paper specifically focuses on the sensitivity of back propagation to initial weight configuration. Therefore, this paper belongs to the sub-category of AI known as Neural Networks.
Reinforcement Learning, Probabilistic Methods  Explanation:  This paper belongs to the sub-category of Reinforcement Learning because it discusses the use of active learning, which involves an agent making decisions based on feedback from its environment. The paper also discusses the use of probabilistic methods, such as Bayesian optimization, to guide the agent's exploration. These methods involve using probability distributions to model uncertainty and make decisions based on the most likely outcomes.
Neural Networks.   Explanation: The paper discusses a neural network model of memory consolidation, which incorporates known features of consolidation and is designed to simulate the transfer of memory from the medial temporal lobe to the neocortex. The paper also proposes several experiments to evaluate the performance of the model and implements an extended version of the model to examine its performance on the original task. There is no mention of any other sub-category of AI in the text.
Neural Networks, Rule Learning.   Neural Networks: The paper presents a computational model based on a form of competitive learning, which is a type of neural network.   Rule Learning: The model used in the paper includes a weight normalization rule, which is a type of rule learning.
Probabilistic Methods.   Explanation: The paper discusses methods for estimating the average and variance of test error rates over a set of classifiers, which involves probabilistic reasoning and statistical analysis. The authors consider the process of drawing a classifier at random for each example and using it on all examples, and they discuss how to estimate the expected test error rate and variance in each case. These methods rely on probabilistic models and assumptions about the distribution of errors and classifiers, making this paper most closely related to the sub-category of Probabilistic Methods in AI.
Probabilistic Methods.   The paper discusses the formal model of learning from examples called "probably approximately correct" (PAC) learning, which is a probabilistic method for learning from noisy data. The paper also describes a learning environment based on a natural combination of two noise models, and proposes a technique for learning in this environment based on statistical query learning. The paper shows that the noise tolerance of this technique is roughly optimal with respect to the desired learning accuracy and provides a smooth tradeoff between the tolerable amounts of the two types of noise. Therefore, the paper is primarily related to probabilistic methods in AI.
Reinforcement Learning, Rule Learning.   Reinforcement learning is the main focus of the paper, as the authors present a decision tree based approach to function approximation in reinforcement learning. They compare their approach with other function approximators on three reinforcement learning problems.   Rule learning is also present in the paper, as the decision tree is a type of rule-based model. The authors use the decision tree to approximate the optimal policy in the reinforcement learning problems they consider.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper presents an approach to develop new game playing strategies based on artificial evolution of neural networks.   Neural Networks: The approach presented in the paper is based on the use of neural networks to discover strategies in Othello against a random-moving opponent and later against an alpha-beta search program. The paper demonstrates how evolutionary neural networks can develop novel solutions by turning an initial disadvantage into an advantage in a changed environment.
Probabilistic Methods.   Explanation: The paper discusses the use of Markov Chain Monte Carlo (MCMC) in Item Response Theory (IRT) to handle multiple item types, missing data, and rated responses. MCMC is a probabilistic method used to generate samples from a probability distribution, which is then used to estimate parameters in IRT models. Therefore, this paper belongs to the sub-category of Probabilistic Methods in AI.
Reinforcement Learning, Neural Networks, Probabilistic Methods.  Reinforcement Learning is present in the text as the paper discusses the role of transfer in learning, which is a key concept in reinforcement learning. The paper also mentions the use of reinforcement learning algorithms in transfer learning.  Neural Networks are present in the text as the paper discusses the use of neural networks in transfer learning. The paper also mentions the use of deep neural networks in transfer learning.  Probabilistic Methods are present in the text as the paper discusses the use of probabilistic models in transfer learning. The paper also mentions the use of Bayesian methods in transfer learning.
Theory. The paper focuses on deriving general bounds on the complexity of learning in the Statistical Query model and in the PAC model with classification noise. It does not discuss any specific application or implementation of AI, but rather provides theoretical results and analysis.
Neural Networks.   Explanation: The paper discusses the potential and limitations of neural network methods for various classes of applications. It also compares different types of neural networks, such as supervised, unsupervised, and generalizing systems. There is no mention of other sub-categories of AI such as case-based, genetic algorithms, probabilistic methods, reinforcement learning, rule learning, or theory.
Probabilistic Methods.   Explanation: The paper discusses the use of Bayesian inference, which is a probabilistic method. The focus of the paper is on the construction of prior distributions, which are a fundamental component of Bayesian inference. The paper reviews various techniques for constructing priors, including Jeffreys's rules, which are based on probabilistic considerations. The paper also discusses the practical and philosophical issues that arise when using priors constructed by formal rules. Therefore, the paper is most closely related to the sub-category of Probabilistic Methods within AI.
Neural Networks.   Explanation: The paper discusses the use of NARX (Nonlinear AutoRegressive models with eXogenous inputs) neural network models for system identification and time series prediction. It also proposes a method for improving the performance of these models through intelligent memory order selection. The entire paper is focused on the use and optimization of neural networks, making it most closely related to this sub-category of AI.
Probabilistic Methods.   Explanation: The paper presents an approach that uses a stochastic complexity formula to guide the learning process, and the approach is organized as a simulated annealing-based beam search. These are both examples of probabilistic methods, which involve the use of probability theory to model uncertainty and make decisions.
Case Based.   Explanation: The paper discusses the limitations of existing case-based reasoning (CBR) systems in supporting creative design and proposes an extension of the standard CBR framework to facilitate exploration of ideas and problem elaboration. The entire paper is focused on case-based reasoning and its application in design, making it the most related sub-category of AI.
Probabilistic Methods.   Explanation: The paper presents a framework for building probabilistic automata using Gibbs distributions to model state transitions and output generation. The EM algorithm is used for parameter estimation, which is a common technique in probabilistic modeling. The paper also discusses the relationship with certain classes of stochastic feedforward neural networks, but the focus is on the probabilistic modeling aspect. There is no mention of case-based reasoning, genetic algorithms, reinforcement learning, rule learning, or theory in the paper.
Case Based, Rule Learning  Explanation:   - Case Based: The paper discusses the use of memory-based techniques to store, organize, retrieve, and reuse experiential knowledge held in memory, which is a characteristic of case-based reasoning. - Rule Learning: The paper describes demex, an interactive computer-aided design system that employs memory-based techniques to help its users explore the design problems they pose to the system, in order to acquire a better understanding of the requirements of the problems. This involves learning rules or patterns from past experiences to aid in the exploration process.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper proposes a new algorithm called up-propagation that uses layered neural networks to generate hypotheses and revise them. The algorithm is benchmarked against principal component analysis in experiments on images of handwritten digits.  Probabilistic Methods: The paper discusses the doctrine of unconscious inference, which argues that perceptions are formed by the interaction of bottom-up sensory data with top-down expectations. The up-propagation algorithm utilizes a negative feedback loop driven by an error signal from the bottom-up connections, which is also used for learning the connections. This error signal is a probabilistic measure of the difference between the generated pattern and the sensory input.
Reinforcement Learning, Case Based  Explanation:  The paper primarily belongs to the sub-category of Reinforcement Learning as it discusses the use of reinforcement learning to generate surfaces that represent the optimum choice of actions to achieve a goal. It also talks about how the system can identify when a similar task has been solved previously and retrieve the relevant surface, resulting in a faster learning rate.  Additionally, the paper also belongs to the sub-category of Case Based as it demonstrates the use of a case base of surfaces to speed up reinforcement learning. The system indexes into this case base to retrieve relevant surfaces for similar tasks, which is a key feature of case-based reasoning.
Genetic Algorithms, Reinforcement Learning  Explanation:  - Genetic Algorithms: The paper demonstrates the use of genetic algorithms in conjunction with lazy learning to solve a class of reinforcement learning problems. The experiments conducted in the paper apply three learning approaches, including a genetic algorithm, to a pursuit game. The genetic algorithm is also used as a bootstrapping method for k-NN to create a system to provide good examples for lazy learning. - Reinforcement Learning: The paper focuses on solving a class of delayed reinforcement learning problems, specifically differential games, using machine learning algorithms. The experiments conducted in the paper apply three reinforcement learning approaches, including lazy Q-learning, k-nearest neighbor (k-NN), and a genetic algorithm, to a pursuit game. The paper also suggests that solutions for differential games can provide solution strategies for the general class of planning and control problems.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper describes a hierarchical, generative model that can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly.   Probabilistic Methods: The model uses Bayesian perceptual inference to perform probabilistic reasoning. The connection strengths can be updated using a very simple learning rule that only requires locally available information.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper discusses the use of genetic algorithms in neuro-evolution, specifically in the context of evolving individual neurons and complete neural networks. It also presents a hierarchical approach to genetic search that overcomes the limitations of a purely neuron-based search.   Neural Networks: The paper focuses on the evolution of neural networks, specifically exploring the benefits and limitations of evolving individual neurons versus complete networks. It also presents a hierarchical approach to neuro-evolution that integrates both neuron-level and network-level searches.
Genetic Algorithms, Neural Networks, Reinforcement Learning  Genetic Algorithms: The paper discusses the use of genetic algorithms in evolutionary robotics to evolve the behavior and morphology of robots. It explains how genetic algorithms work and how they can be used to optimize robot performance.  Neural Networks: The paper also discusses the use of neural networks in evolutionary robotics. It explains how neural networks can be used to control the behavior of robots and how they can be evolved using genetic algorithms.  Reinforcement Learning: The paper briefly mentions the use of reinforcement learning in evolutionary robotics. It explains how reinforcement learning can be used to train robots to perform specific tasks and how it can be combined with genetic algorithms to evolve robot behavior.
Reinforcement Learning.   Explanation: The paper explicitly discusses reinforcement learning as the problem it addresses and presents the SKILLS algorithm as a solution to scale reinforcement learning to complex real-world tasks. None of the other sub-categories of AI are mentioned or discussed in the paper.
Theory.   Explanation: The paper presents a theoretical analysis of a generalization of the mistake-bound model for learning {0, 1}-valued functions, and proposes a general-purpose optimal algorithm for the problem. The paper does not involve any specific implementation or application of AI techniques such as neural networks, probabilistic methods, or reinforcement learning.
Probabilistic Methods.   Explanation: The paper discusses methods for estimating characteristics of a distribution of interest using Markov Chain Monte Carlo (MCMC) methods, which are probabilistic methods. The paper specifically focuses on convergence diagnostics for MCMC, which are used to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution. The paper does not discuss any other sub-categories of AI.
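A standard convergence diagnostic of the kind the paper surveys is the Gelman-Rubin potential scale reduction factor, which compares between-chain and within-chain variance across several parallel chains. Here is a minimal sketch, assuming simple Gaussian chains for illustration rather than output from any particular sampler.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for m parallel chains of
    length n (array of shape (m, n)). Values near 1 suggest the chains
    have mixed; values well above 1 indicate it is not yet safe to stop
    sampling."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)            # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_plus = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(1)
mixed = rng.normal(0.0, 1.0, size=(4, 2000))   # four chains, same target
stuck = mixed + np.arange(4)[:, None]          # chains stuck at different modes
```

For the well-mixed chains R-hat is essentially 1, while the shifted chains (mimicking samplers trapped in separate modes) produce an R-hat far above 1, signalling that sampling should continue.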
Genetic Algorithms, Rule Learning  The paper belongs to the sub-categories of Genetic Algorithms and Rule Learning.   Genetic Algorithms: The paper proposes an evolutionary algorithm for acquiring modules that can be used to solve new problems. The algorithm uses a genetic algorithm to evolve a population of modules that are evaluated based on their fitness in solving a given problem. The fittest modules are then selected for further evolution, while the weaker ones are discarded. This process is repeated until a satisfactory set of modules is obtained.  Rule Learning: The paper also discusses the use of decision trees as a means of acquiring modules. Decision trees are a type of rule-based learning algorithm that can be used to learn a set of rules that can be used to solve a given problem. The paper proposes a method for using decision trees to learn modules that can be used to solve new problems. The decision trees are trained on a set of training examples, and the resulting rules are used to construct a module that can be used to solve new problems.
Neural Networks.   Explanation: The paper discusses a learning rule for a two-layer neural network to extract invariant information from input patterns. The focus is on connectionist learning rules, which are a key aspect of neural networks. Other sub-categories of AI, such as Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, and Case Based methods, are not mentioned in the text.
Case Based, Reinforcement Learning  Explanation:  This paper belongs to the sub-category of Case Based AI because it discusses instance-based learning methods, which are a type of case-based reasoning. The paper also belongs to the sub-category of Reinforcement Learning because it discusses the advantages of instance-based methods for autonomous systems, which often use reinforcement learning techniques.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses the use of discrete Bayesian models to model uncertainty in mobile-robot navigation.   Reinforcement Learning: The paper presents the optimal solution to the problem of how actions should be chosen in mobile-robot navigation, formulated as a partially observable Markov decision process. It also explores a variety of heuristic control strategies, which can be seen as a form of reinforcement learning. The control strategies are compared experimentally, both in simulation and on a robot.
The paper belongs to the sub-category of AI known as Neural Networks. This is evident from its venue, IEEE Transactions on Neural Networks, and from the abstract, which describes a system of reflective agents that use neural networks to process information and make decisions. The paper is also available as GMD report #794.
Case Based, Rule Learning  Explanation:  - Case Based: The paper describes an implementation and experiment with Salzberg's Nested Generalized Exemplars algorithm, which is a case-based method for classification.  - Rule Learning: The paper mentions the implementation of the NGE algorithm, which involves learning rules from examples. The author also notes a curious result while using the algorithm, which could be interpreted as a discovery of new rules or patterns in the data.
Probabilistic Methods. This paper belongs to the sub-category of Probabilistic Methods in AI. The paper presents a new Markov chain sampling method for distributions with isolated modes. The method uses a series of distributions that interpolate between the distribution of interest and a distribution for which sampling is easier. The method involves systematic movement from the desired distribution to the easily-sampled distribution and back to the desired distribution. The paper discusses the efficiency of the method in simple and complex distributions and how it compares to simulated tempering. The entire paper is focused on probabilistic methods for sampling from multimodal distributions.
Probabilistic Methods.   Explanation: The paper describes the use of fuzzy methods to represent uncertainty in the student model, which is a common technique in probabilistic modeling. Additionally, the ML-Modeler component uses machine learning techniques to infer the student's learning methods and generate hypotheses about their misconceptions and errors, which can also be seen as a probabilistic approach to modeling the student's knowledge state.
Case Based, Theory.   The paper belongs to the sub-category of Case Based AI because it describes a case-based approach to Introspection Planning which utilises previous experience obtained during reasoning at the meta-level and at the object level. The paper also belongs to the sub-category of Theory because it discusses the concept of metacognition and how it can be improved through the use of mental models and introspection planning.
Probabilistic Methods.   Explanation: The paper discusses the use of graphical models, specifically chain graphs, to represent possible dependences among statistical variables. These models use probabilistic methods to analyze and infer relationships between variables. The paper also mentions Bayesian belief networks, which are a type of probabilistic graphical model.
Theory.   Explanation: The paper discusses the development of a theory revision method for fault hierarchies in expert systems, which operates directly on the fault hierarchy representation. The paper does not mention any of the other sub-categories of AI listed in the question.
Genetic Algorithms.   Explanation: The paper focuses on the use of distributed genetic algorithms for partitioning uniform grids. The authors describe the genetic algorithm approach in detail, including the use of fitness functions, crossover and mutation operators, and selection strategies. They also discuss the benefits of using a distributed approach, which allows for parallel processing and improved scalability. While other sub-categories of AI may be relevant to this topic, such as probabilistic methods or theory, the primary focus of the paper is on the use of genetic algorithms.
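The GA machinery described above (fitness functions, crossover and mutation operators, selection) can be sketched generically. The toy below solves one-max with tournament selection, one-point crossover, and bitwise mutation; it is an illustrative sketch only, far simpler than the paper's distributed, grid-partitioning GA, and all parameter values are invented.

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=40, generations=60,
                      mutation_rate=0.02, seed=0):
    """Minimal generational GA over bit strings: tournament selection,
    one-point crossover, per-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():  # tournament of size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)                 # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < mutation_rate)  # bit-flip mutation
                     for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm(sum)   # one-max: fitness = number of 1 bits
```

Even this bare-bones version drives the population close to the all-ones optimum within a few dozen generations; the paper's contribution lies in distributing such a search and tailoring the representation to grid partitioning.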
Reinforcement Learning, Theory.   Reinforcement learning is present in the paper as the authors describe a framework in which a competitive algorithm makes repeated use of a strategy learning component that can learn strategies which defeat a given set of opponents.   Theory is also present in the paper as the authors provide a theoretical analysis of game learning, including a complexity analysis and listing new questions arising from their work.
This paper belongs to the sub-category of AI known as Genetic Algorithms.   Explanation: The title of the paper explicitly mentions "Genetic Algorithms" and the abstract describes the paper as a comparison of selection schemes used in genetic algorithms. Therefore, it is clear that the paper is focused on the use and analysis of genetic algorithms. None of the other sub-categories of AI are mentioned or discussed in the paper.
Theory.   Explanation: The paper proposes a new approach for bounding the generalization error of a learning algorithm after the data has been observed, which is a theoretical concept. The paper does not discuss any specific AI subfield such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Reinforcement Learning, Probabilistic Methods, Theory.   Reinforcement learning is the main focus of the paper, as the authors propose a new framework for studying Markov decision processes (MDPs) and the goal of learning in MDPs is to find a policy that yields the maximum expected return over time.   Probabilistic methods are used to analyze the energy landscape over policy space, and the authors calculate the overall distribution of expected returns as well as the distribution of returns for policies at a fixed Hamming distance from the optimal one.   The paper also falls under the category of Theory, as the authors use methods from statistical mechanics to analyze the energy landscape in the thermodynamic limit N → ∞ and discuss the problem of learning optimal policies from empirical estimates of the expected return.
Neural Networks, Theory.   Explanation:  The paper is primarily focused on analyzing the theoretical properties of neural networks, specifically their VC dimension. The paper does not discuss any practical applications or implementations of neural networks, which would be more relevant to sub-categories such as Reinforcement Learning or Probabilistic Methods. Therefore, Neural Networks is the most related sub-category. Additionally, the paper falls under the Theory sub-category as it presents a mathematical proof and analysis of the VC dimension of neural networks.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper proposes novel on-line learning algorithms for blind separation of signals using neural networks. The paper investigates the validity, performance, and dynamic properties of the proposed learning algorithms through computer simulation experiments.  Probabilistic Methods: The paper discusses the ability of the proposed neural network models to exhibit random switch of attention or chaotic switching of output signals. This suggests the use of probabilistic methods in the proposed learning algorithms.
Reinforcement Learning, Neural Networks  The paper belongs to the sub-category of Reinforcement Learning as it presents a learning algorithm based on incremental dynamic programming to solve multiple Markovian decision tasks (MDTs). The agent learns to solve a set of composite and elemental MDTs by producing a temporal decomposition. The paper also belongs to the sub-category of Neural Networks as it presents a modular network architecture that allows a single learning agent to learn to solve multiple MDTs with significant transfer of learning across the tasks. The architecture is trained on a set of composite and elemental MDTs.
Theory.   Explanation: The paper describes a methodology for deductive program synthesis to construct numerical simulation codes, and a system that uses first order Horn logic to synthesize numerical simulators built from numerical integration and root extraction routines. The paper does not mention any of the other sub-categories of AI listed.
Probabilistic Methods.   Explanation: The paper discusses Bayesian networks, which are a probabilistic graphical model used for representing and reasoning about uncertain knowledge. The paper proposes a formal notion of context-specific independence (CSI) based on regularities in the conditional probability tables (CPTs) at a node, and suggests a qualitative representation scheme for capturing CSI. The paper also discusses ways in which this representation can be used to support effective inference algorithms.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as it introduces a new kind of eligibility trace for this type of learning. The paper also analyzes the theoretical properties and efficiency of the conventional and replacing trace methods. Therefore, reinforcement learning is the most related sub-category of AI.   Theory is also applicable as the paper provides a theoretical analysis of the replacing trace method and its comparison to the conventional trace method. The paper shows that the replacing trace method results in faster and more reliable learning, and is closely related to the maximum likelihood solution for the tasks analyzed.
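The accumulating-versus-replacing distinction described above can be made concrete with a small sketch (a toy tabular TD(λ) in Python, not code from the paper; the chain, rewards, and parameter values are invented for illustration):

```python
# Toy tabular TD(lambda) contrasting accumulating ("conventional") and
# replacing eligibility traces. Illustrative only: the chain, rewards,
# and parameter values are invented, not taken from the paper.

def td_lambda(episodes, n_states=3, alpha=0.1, gamma=1.0, lam=0.9,
              replacing=True):
    V = [0.0] * n_states                 # state-value estimates
    for episode in episodes:             # episode: list of (s, reward, s_next)
        e = [0.0] * n_states             # eligibility traces, reset per episode
        for s, r, s_next in episode:
            if replacing:
                e[s] = 1.0               # replacing trace: reset to 1 on a visit
            else:
                e[s] += 1.0              # accumulating trace: increment on a visit
            v_next = V[s_next] if s_next is not None else 0.0
            delta = r + gamma * v_next - V[s]
            for i in range(n_states):    # credit every recently visited state
                V[i] += alpha * delta * e[i]
                e[i] *= gamma * lam      # traces decay at each step
    return V

# One short episode: state 0 -> state 1 -> terminal, reward 1 at the end.
episode = [(0, 0.0, 1), (1, 1.0, None)]
values = td_lambda([episode])            # -> [0.09, 0.1, 0.0]
```

On tasks where states are revisited within an episode, the two settings diverge: replacing traces keep each state's credit bounded by 1, which is the property the paper's analysis turns on.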
Theory.   Explanation: The paper presents a functional theory of the complete reading process, which integrates results from psychology, artificial intelligence, and education. While the paper does mention some AI sub-categories such as case-based reasoning and rule learning, these are not the main focus of the paper and are not developed in detail. Therefore, the paper is best categorized as belonging to the Theory sub-category of AI.
This paper does not belong to any sub-category of AI, as it is an error message related to font substitution in a DVI file.
Case Based.   Explanation: The paper is solely focused on case-based reasoning, which is a sub-category of AI that involves solving new problems by adapting solutions from similar past problems. The paper discusses foundational issues, methodological variations, and system approaches related to case-based reasoning. There is no mention of genetic algorithms, neural networks, probabilistic methods, reinforcement learning, rule learning, or theory in the paper.
Theory.   Explanation: The paper is focused on the problem of combining updates and counterfactual conditionals in propositional knowledge bases, which is a topic related to theory change in AI. The paper presents a decidable logic, called VCU 2, that has both update and counterfactual implication as connectives in the object language. The paper also discusses the semantics of VCU 2, which is based on possible worlds, and presents a sound and complete axiomatization. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper proposes a probabilistic method for finding "algorithmically simple" problem solutions with high generalization capability. The method is based on Levin complexity and inspired by Levin's optimal universal search algorithm. The probabilistic search algorithm finds the "good" programs (the ones quickly computing algorithmically probable solutions fitting the training data).  Theory: The paper reviews some basic concepts of algorithmic complexity theory relevant to machine learning, and how the Solomonoff-Levin distribution (or universal prior) deals with the prior problem. The method is based on Levin complexity (a time-bounded generalization of Kolmogorov complexity) and inspired by Levin's optimal universal search algorithm.
Case Based, Rule Learning  Explanation:  This paper belongs to the sub-category of Case Based AI because it describes a system that infers possible expressive transformations for a new phrase based on similarity criteria with a set of cases (examples) of expressive performances. The system uses background musical knowledge to apply these transformations to the new phrase.   It also belongs to the sub-category of Rule Learning because the system uses rules based on the extracted expressive parameters to infer the possible transformations for the new phrase. These rules are learned from the set of cases and are used to guide the transformation process.
Neural Networks, Theory.   Neural Networks: The paper discusses the application of techniques used in the analysis of neural networks with small weights to voting methods.   Theory: The paper presents a theoretical explanation for the effectiveness of boosting and the observed phenomenon of the test error not increasing as the size of the generated classifier becomes very large. The paper also compares its explanation to those based on the bias-variance decomposition.
Probabilistic Methods.   Explanation: The paper presents a framework based on maximum likelihood density estimation using mixture models for the density estimates. The Expectation-Maximization (EM) principle is used for both the estimation of mixture components and for coping with missing data. These are all probabilistic methods commonly used in machine learning.
Neural Networks, Case Based.   Neural Networks: The paper describes a hierarchical feature map system that recognizes an input story by classifying it at three levels using a self-organizing process. The system uses a pyramid of feature maps to visualize the taxonomy and lay out the topology of each level.   Case Based: The paper describes how the recognition taxonomy, i.e. the breakdown of each script into tracks and roles, is extracted automatically and independently for each script from examples of script instantiations in an unsupervised self-organizing process. This process resembles human learning in that the differentiation of the most frequently encountered scripts become gradually the most detailed. The resulting structure serves as memory organization for script-based episodic memory.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper describes a system that uses an adaptive neural controller to learn the sequential generation of fovea trajectories for target detection.   Reinforcement Learning: The task is described as a "reward-only-at-goal" task, which involves a complex temporal credit assignment problem. The system learns to generate fovea trajectories through trial and error, without a teacher providing desired activations of "eye-muscles" at various times. The system also learns to track moving targets.
Probabilistic Methods.   Explanation: The paper presents a statistical model based on a hierarchical mixture model, which is a probabilistic method. The EM algorithm is also a probabilistic method used for maximum likelihood estimation.
This paper belongs to the sub-category of AI called Case Based.   Explanation: The paper proposes a memory model for case retrieval, which is a key component of case-based reasoning (CBR) systems. CBR is a subfield of AI that involves solving new problems by adapting solutions from similar past cases. The paper describes how the proposed memory model uses activation passing to retrieve relevant cases from memory and adapt them to solve new problems. This approach is a fundamental aspect of case-based reasoning.
Probabilistic Methods.   Explanation: The paper discusses the EM algorithm, which is a probabilistic method for maximum likelihood estimation in data with unobserved variables. The paper also discusses variants of the EM algorithm that exploit different properties of the data, such as sparsity and incremental updates.
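As a hedged illustration of the basic EM loop that such variants build on (a minimal two-component 1-D Gaussian mixture, not the paper's own sparse or incremental variants; the data and initialization are invented):

```python
import math

# Minimal EM for a two-component 1-D Gaussian mixture. A textbook
# sketch for illustration; data and initialization are invented.

def em_gmm_1d(xs, iters=50):
    mu = [min(xs), max(xs)]              # crude initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in xs:
            w = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in (0, 1)]
            z = w[0] + w[1]
            resp.append([w[0] / z, w[1] / z])
        # M-step: re-estimate parameters from the responsibilities
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            var[k] = max(var[k], 1e-6)   # guard against variance collapse
    return mu, var, pi

data = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]   # two well-separated clusters
mu, var, pi = em_gmm_1d(data)
```

The variants the paper discusses change how the E-step is scheduled (e.g. updating only some responsibilities per pass), not the overall alternation shown here.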
Neural Networks.   Explanation: The paper describes a network of Wilson-Cowan oscillators, which are a type of neural network model. The paper investigates the emergent properties of synchronization and desynchronization in this network, which are characteristic behaviors of neural networks. The paper also discusses the use of a Hebbian rule for changing coupling strengths, which is a common learning rule used in neural networks. Overall, the paper is focused on the behavior and analysis of a neural network model, making it most closely related to the sub-category of Neural Networks.
Probabilistic Methods.   Explanation: The paper describes the implementation of a probabilistic regression model using a simulation technique known as Gibbs sampling. The focus is on Bayesian inference, which is a probabilistic approach to statistical modeling. There is no mention of case-based reasoning, genetic algorithms, neural networks, reinforcement learning, rule learning, or theory in the text.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the authors built a classifier based on decision trees, which is a type of rule-based learning.   Probabilistic Methods are also present, as the classifier learns to classify unseen data by estimating the probability of each outcome given the input features.
Neural Networks.   Explanation: The paper specifically discusses the implementation of neural networks using SAS software. While other sub-categories of AI may be used in conjunction with neural networks, they are not the focus of this paper.
Probabilistic Methods.   Explanation: The paper discusses a modification to Kyburg's Evidential Probability system, which is a probabilistic method for reasoning under uncertainty. The paper proposes a new scheme for selecting the right reference class and interval, which is still within the framework of probabilistic methods.
Reinforcement Learning.   Explanation: The paper explicitly mentions the use of reinforcement learning methods to learn domain-specific heuristics for job shop scheduling. The temporal difference algorithm TD(λ) is applied to train a neural network to learn a heuristic evaluation function over states, which is then used by a one-step look-ahead search procedure to find good solutions to new scheduling problems. The results suggest that reinforcement learning can provide a new method for constructing high-performance scheduling systems. There is no mention of any other sub-category of AI in the paper.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper presents a neural network approach to the inverted pendulum task, which is used to control a mini-robot in real-time. The learning scheme is based on a neural network that learns the proper actions for balancing the pole given the current state of the system and a failure signal.  Reinforcement Learning: The paper describes the inverted pendulum task as a complex control-learning problem, where the controller must learn the proper actions for successfully balancing the pole given only the current state of the system and a failure signal. The approach presented in the paper is based on reinforcement learning, where the controller learns from feedback in the form of a failure signal.
Probabilistic Methods.   Explanation: The paper discusses the use of Bayes factors, which are a probabilistic method for comparing statistical models. The authors propose an approximate method for calculating Bayes factors in generalized linear models and also discuss how to account for model uncertainty. There is no mention of any other sub-category of AI in the paper.
Reinforcement Learning, Neural Networks.   Reinforcement learning is the main focus of the paper, as the modified Platt's resource-allocation network (RAN) is designed for a reinforcement-learning paradigm. The Q-learning network used to solve the inverted pendulum problem is also a type of reinforcement learning.   Neural networks are also relevant, as the modified RAN uses hidden units that continue to learn via back-propagation after being restarted. The Q-learning network is also a type of neural network.
This paper does not belong to any of the sub-categories of AI listed. It is focused on a new processing paradigm for exploiting fine-grain parallelism and does not discuss any AI techniques or applications.
This paper belongs to the sub-category of AI known as Case Based.   Explanation:  The paper proposes a hybrid algorithm that combines the nearest-neighbor and nearest-hyperrectangle methods for classification tasks. The algorithm is based on the idea of using past cases to make decisions about new cases, which is a key characteristic of case-based reasoning. The authors also discuss the use of similarity measures and feature selection, which are common techniques in case-based reasoning. Therefore, this paper is most closely related to the sub-category of Case Based AI.
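The hybrid idea described above might be sketched roughly as follows (an invented toy, not the authors' algorithm: use a stored hyperrectangle's label when the query falls inside one, otherwise fall back to plain 1-nearest-neighbor):

```python
# Toy sketch of a nearest-hyperrectangle / nearest-neighbor hybrid.
# The rectangles, points, and labels are invented for illustration;
# this is not the authors' algorithm.

def classify(query, rects, examples):
    # rects: list of (low, high, label) axis-aligned hyperrectangles
    for low, high, label in rects:
        if all(l <= q <= h for q, l, h in zip(query, low, high)):
            return label                 # query inside a rectangle: use its label
    def dist2(a, b):                     # squared Euclidean distance
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # fall back to plain 1-nearest-neighbor on stored examples
    return min(examples, key=lambda ex: dist2(query, ex[0]))[1]

rects = [((0.0, 0.0), (1.0, 1.0), "A")]
examples = [((2.0, 2.0), "B"), ((0.5, 3.0), "C")]
inside = classify((0.5, 0.5), rects, examples)    # "A": inside the rectangle
outside = classify((2.1, 2.1), rects, examples)   # "B": nearest stored example
```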
Probabilistic Methods, Case Based.   Probabilistic Methods: The paper discusses the use of Hoeffding Races, which is a probabilistic method for finding a good model for the data by quickly discarding bad models and concentrating the computational effort at differentiating between the better ones.  Case Based: The paper focuses on the special case of leave-one-out cross validation applied to memory-based learning algorithms, which is a type of case-based reasoning.
Theory. The paper presents a theoretical analysis of the computational complexity of a learning problem and does not involve the implementation or application of any specific AI technique.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper describes an unsupervised learning algorithm for a multilayer network of stochastic neurons. The algorithm involves bottom-up recognition connections and top-down generative connections to convert input into representations in successive hidden layers and reconstruct the representation in one layer from the representation in the layer above.   Probabilistic Methods: The wake-sleep algorithm involves adapting recognition and generative connections to increase the probability of producing the correct activity vector in the layer below and above, respectively. The aim of learning is to minimize the description length, which is the total number of bits required to communicate the input vectors in a certain way, and this forces the network to learn economical representations that capture the underlying regularities in the data. The paper also mentions that the neurons in the network are stochastic.
Neural Networks.   Explanation: The paper proposes the use of an artificial neural network for structuring a software library based on the semantic similarity of the stored software components. The specific type of neural network used is an unsupervised learning model that makes the semantic relationship between the components geographically explicit. There is no mention of any other sub-category of AI in the text.
Neural Networks, Explanation-Based Learning.   Neural Networks are mentioned in the text as an example of an inductive learning method, specifically Backpropagation. Explanation-Based Learning is mentioned as an example of an analytical learning method. The paper discusses the combination of these two types of learning in the Explanation Based Neural Network learning (EBNN) mechanism.
Probabilistic Methods.   Explanation: The paper discusses the use of Gibbs sampling, a probabilistic method, for linkage analysis in large pedigrees with many loops. The authors propose a blocking Gibbs sampling algorithm to improve the efficiency of the analysis. The paper also discusses the use of Markov Chain Monte Carlo (MCMC) methods, which are another type of probabilistic method commonly used in Bayesian inference.
Probabilistic Methods.   Explanation: The paper discusses simulation algorithms that use probabilistic methods to approximate random set models in stochastic geometry. The Coupling from the Past (CFTP) method proposed by Propp and Wilson is a probabilistic method that delivers perfect simulations of Markov chains. The paper also mentions the equilibrium distribution of a spatial birth-and-death process, which is a probabilistic concept. Therefore, this paper belongs to the sub-category of Probabilistic Methods in AI.
Probabilistic Methods.   Explanation: The paper proposes a Bayesian approach to detect clusters and discontinuities in disease maps. Bayesian methods are a type of probabilistic method that use prior knowledge and data to make probabilistic inferences about unknown parameters. The authors use Bayesian hierarchical models to estimate the spatial distribution of disease rates and identify clusters and discontinuities. They also use Bayesian model selection to compare different models and choose the best one. Therefore, this paper belongs to the sub-category of Probabilistic Methods in AI.
Rule Learning, Case Based.   The paper primarily focuses on the combination of case-based reasoning and rule induction techniques, which falls under the sub-category of Rule Learning. The approach presented in the paper attempts to induce rules for a particular context, which can be used for indexing and similarity assessment to support the CBR process. This integration of rule induction with CBR is a key aspect of the paper. Additionally, the paper discusses the usefulness of CBR in complex domains, which falls under the sub-category of Case Based AI.
This paper belongs to the sub-category of AI known as Probabilistic Methods.   Explanation: The paper describes a probabilistic approach to simultaneously estimate weighting, smoothing, and physical parameters for numerical weather prediction models. The authors use a Bayesian framework to derive a joint posterior distribution of the parameters, which is then sampled using Markov Chain Monte Carlo (MCMC) methods. The paper also discusses the use of ensemble methods to estimate the uncertainty in the model predictions. Therefore, the paper is primarily focused on probabilistic methods for parameter estimation and uncertainty quantification.
Reinforcement Learning, Rule Learning  The paper belongs to the sub-categories of Reinforcement Learning and Rule Learning.   Reinforcement Learning is present in the paper as the robot agent plans a path to avoid the adversary while fulfilling the goal requirements. The agent learns from its past experiences and adjusts its actions accordingly to maximize the reward.   Rule Learning is present in the paper as the authors use a finite automata learning algorithm to generate a model of the adversarial robot's behavior. The automaton is used to predict the next move of the adversary, and the robot agent plans a path to avoid it based on the learned rules.
Probabilistic Methods.   Explanation: The paper describes a Bayesian approach to forecasting multinomial time series using conditionally Gaussian dynamic models. The use of Bayesian methods involves probabilistic modeling and inference.
Probabilistic Methods, Genetic Algorithms, Theory.   Probabilistic Methods: The paper uses transient Markov chain analysis, a probabilistic method, to model and understand the behavior of finite-population GAFOs observed while in transition to steady states.  Genetic Algorithms: The paper is specifically focused on the properties of genetic algorithms (GAs) being used for function optimization (GAFOs), and applies Markov chains to analyze them.  Theory: The paper contributes to the theoretical understanding of GAFOs, providing new insights into the circumstances under which they will (or will not) perform well.
Neural Networks.   Explanation: The paper discusses the application of training with noise in multi-layer perceptron, which is a type of neural network. The proposed algorithm is designed to determine the relevance of input variables in the neural network. The paper does not mention any other sub-categories of AI such as case-based reasoning, genetic algorithms, probabilistic methods, reinforcement learning, or rule learning.
Rule Learning, Theory.   Explanation:  The paper presents a new algorithm for constructing decision trees, which falls under the sub-category of Rule Learning in AI. The paper also discusses the theoretical aspects of the algorithm and presents empirical results for various domains, indicating the presence of Theory in the paper. The other sub-categories of AI (Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning) are not directly relevant to the content of the paper.
Reinforcement Learning. This paper belongs to the Reinforcement Learning sub-category of AI. The paper provides a comprehensive overview of Reinforcement Learning methods and presents an application to the attitude control of a satellite. The paper also discusses the mathematical background of RL, which is closely related to optimal control and dynamic programming. The other sub-categories of AI, such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, and Rule Learning, are not mentioned in the text.
Neural Networks.   This paper belongs to the sub-category of Neural Networks in AI. The paper presents a biologically motivated mechanism for self-organizing a neural network with modifiable lateral connections. The weight modification rules are purely activity-dependent, unsupervised, and local. The lateral interaction weights are initially random but develop into a "Mexican hat" shape around each neuron. This self-organizing feature map demonstrates how self-organization can bootstrap itself using input information.
Probabilistic Methods, Theory  Probabilistic Methods: The paper discusses the use of statistical methods in various fields such as economics, social sciences, and epidemiology. It also mentions the lack of mathematical notation to distinguish causal from equational relationships. Graphical methods are proposed as a solution to this problem, which can revolutionize how statistics is used in knowledge-rich applications.  Theory: The paper discusses the concept of causality and its mathematical underpinnings. It also outlines future challenges in this area.
Rule Learning, Theory.   The paper describes a method for inducing logic programs from examples, which falls under the sub-category of Rule Learning. The paper also presents a new framework that integrates existing ILP methods, which involves theoretical considerations.
Probabilistic Methods.   Explanation: The paper discusses techniques for computing upper and lower bounds on marginal probabilities in sigmoid and noisy-OR networks, which are probabilistic models. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
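For context, the noisy-OR conditional probability such networks are built from follows a standard formula: the child fires unless every active parent's cause independently fails. This sketch states that textbook definition, not the paper's bounding technique:

```python
# Standard noisy-OR conditional probability (a textbook definition,
# not code from the paper): the child is on unless every active
# parent's causal mechanism independently fails.

def noisy_or(parent_states, probs, leak=0.0):
    # parent_states: list of 0/1 parent values
    # probs: P(parent alone causes the child), one per parent
    p_all_fail = 1.0 - leak
    for on, p in zip(parent_states, probs):
        if on:
            p_all_fail *= (1.0 - p)
    return 1.0 - p_all_fail

# Two active parents with causal strengths 0.8 and 0.5, no leak:
# P(child) = 1 - 0.2 * 0.5 = 0.9
p = noisy_or([1, 1], [0.8, 0.5])
```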
Probabilistic Methods. This paper belongs to the sub-category of Probabilistic Methods in AI. The paper discusses the development of a formalism for approximating large probabilistic networks, which can be integrated with exact methods whenever they are applicable. The approximations used in the formalism maintain consistently upper and lower bounds on the desired quantities at all times. The paper also discusses the handling of Boltzmann machines, sigmoid belief networks, or any combination (i.e., chain graphs) within the same framework. The accuracy of the methods is verified experimentally.
Theory.   Explanation: The paper is focused on proving a lower bound on the number of examples needed for distribution-free learning of a concept class, which falls under the category of theoretical analysis of machine learning algorithms. The paper does not discuss any specific implementation or application of AI, such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning. Rule learning is somewhat related, as the concept class being studied can be seen as a set of rules, but the paper does not explicitly frame it in that way.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of Self-Organizing Maps (SOMs), which are a type of neural network, for data exploration. The authors explain how SOMs can be used to visualize high-dimensional data and identify patterns within it.   Probabilistic Methods: The paper also discusses the use of probability distributions in SOMs. The authors explain how SOMs can be trained using a probabilistic approach, where each neuron in the network represents a probability distribution over the input data. This allows for the identification of clusters and outliers within the data.
Neural Networks.   Explanation: The paper discusses Tau Net, a neural network for modeling dynamic signals, and its application to speech. The network uses a combination of prediction, recurrence, and time-delay connections to handle temporal variability in the signal. The paper also compares the performance of Tau Nets with and without time constants on speaker-independent tasks of vowel and consonant recognition using speech data. Therefore, the paper primarily belongs to the sub-category of Neural Networks in AI.
Neural Networks, Rule Learning.   Neural Networks: The paper focuses on a specific neural network architecture and learning algorithm, BP-SOM, and discusses the regularities observed in the hidden-unit activations.  Rule Learning: The paper also discusses how the SOM part of the BP-SOM network can be used for automatic rule extraction.
Neural Networks.   Explanation: The paper compares the discrimination powers of Multilayer perceptron (MLP) and Learning Vector Quantisation (LVQ) networks for overlapping Gaussian distributions. It discusses the efficiency of MLP network in handling high dimensional problems due to its sigmoidal form of the transfer function and the use of hyper-planes. The paper also analyzes the learning curves of both algorithms and compares them to theoretical predictions. All of these aspects are related to neural networks, making it the most relevant sub-category of AI for this paper.
Theory.   Explanation: The paper presents a theoretical result, a generalization of Sauer's Lemma, which is a fundamental result in computational learning theory. The paper does not discuss any specific AI techniques or applications, and does not involve any empirical experiments or data analysis. Therefore, it does not belong to any of the other sub-categories listed.
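For context, the standard (non-generalized) statement of Sauer's Lemma that the paper builds on can be written as:

```latex
% Sauer's Lemma (standard form): for a concept class C of VC dimension d,
% the number of distinct labelings C induces on any m points is bounded by
\Pi_C(m) \;\le\; \sum_{i=0}^{d} \binom{m}{i} \;\le\; \left(\frac{em}{d}\right)^{d}
\qquad \text{for } m \ge d .
```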
Probabilistic Methods, Theory.   Probabilistic Methods: The paper focuses on the Gibbs sampler, which is a probabilistic method used for sampling from complex distributions. The article discusses various implementational issues related to the Gibbs sampler, such as updating strategy, parameterization, and blocking.   Theory: The paper provides theoretical results to justify the use of a normal approximation of the target distribution to approximate the rate of convergence of the Gibbs sampler. The article also discusses the mathematical properties of the Gibbs sampler and its convergence properties.
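A minimal Gibbs sampler of the kind the article analyzes can be sketched for a bivariate normal with correlation ρ, where both full conditionals are known exactly (a textbook toy, not code from the article; the burn-in length and seed are arbitrary choices):

```python
import random

# Toy Gibbs sampler for a bivariate normal with correlation rho: each
# coordinate is redrawn from its exact full conditional in turn. A
# standard textbook example, not code from the article.

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=0):
    rng = random.Random(seed)
    sd = (1.0 - rho * rho) ** 0.5        # conditional standard deviation
    x, y = 0.0, 0.0
    samples = []
    for i in range(burn_in + n_samples):
        x = rng.gauss(rho * y, sd)       # x | y ~ N(rho * y, 1 - rho^2)
        y = rng.gauss(rho * x, sd)       # y | x ~ N(rho * x, 1 - rho^2)
        if i >= burn_in:
            samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.9, n_samples=5000)
mean_x = sum(s[0] for s in samples) / len(samples)
```

The implementational issues the article studies (blocking, parameterization, updating order) all amount to changing how the coordinate-wise updates above are grouped and scheduled; with high ρ the single-site scheme shown here mixes slowly.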
Case Based, Reinforcement Learning  Explanation:  This paper belongs to the sub-category of Case Based AI because it discusses instance-based learning methods, which are a type of case-based reasoning. The paper also belongs to the sub-category of Reinforcement Learning because it discusses the advantages of instance-based methods for autonomous systems, which often use reinforcement learning techniques.
Probabilistic Methods.   Explanation: The paper discusses model-based cluster analysis, which is a probabilistic method for clustering data. The authors use a mixture model approach, which involves modeling the data as a mixture of underlying probability distributions, and then using statistical methods to estimate the parameters of these distributions. This approach is a common method for clustering data in a probabilistic way, and is discussed in detail throughout the paper.
Reinforcement Learning, Neural Networks, Rule Learning.   Reinforcement Learning is the primary sub-category of AI in this paper, as the authors have implemented a reinforcement learning architecture as the reactive component of their control system for a simulated race car. They have also tested the tuning, decomposition, and coordination of low-level behaviors using reinforcement learning.   Neural Networks are also present in the paper, as the authors used separate networks for each behavior in their control system.   Rule Learning is also present, as the authors used a simple rule-based coordination mechanism in their control system.
Genetic Algorithms, Rule Learning.   Genetic Algorithms: The paper discusses a case study of data preprocessing for a hybrid genetic algorithm, which suggests that irrelevant features can be eliminated to improve the efficiency of learning.   Rule Learning: The paper discusses cost-sensitive feature elimination, which can be effective for reducing costs of induced hypotheses. This is a technique commonly used in rule learning, where the goal is to induce a set of rules that accurately classify instances in a dataset.
Genetic Algorithms.   Explanation: The paper discusses the use of hierarchical genetic programming (HGP) approaches to accelerate evolution through the discovery, modification, and use of new functions. It analyzes the evolution process from the perspectives of diversity and causality, and demonstrates how HGP increases the exploratory ability of the genetic search process through higher diversity and hierarchical exploitation of useful structures. The paper does not discuss any other sub-categories of AI such as Case Based, Neural Networks, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Neural Networks.   Explanation: The paper describes the construction of neural architectures for sequence processing and compares them to conventional training algorithms for recurrent nets. The focus is on improving the performance of neural networks for learning complex, extended sequences.
Neural Networks, Theory.   Neural Networks: The paper presents a self-organizing model of the primary visual cortex that simulates the behavior of neurons in response to visual stimuli. The model is based on a neural network architecture that incorporates lateral connections and feedback mechanisms, which are known to play a crucial role in visual processing. The authors use the model to investigate the phenomenon of tilt aftereffects, which are observed when the perception of the orientation of a stimulus is biased by prior exposure to stimuli with a different orientation.   Theory: The paper proposes a theoretical framework for understanding the neural mechanisms underlying tilt aftereffects. The authors use the self-organizing model to test different hypotheses about the role of lateral connections and feedback in generating these effects. They also compare the model's predictions to experimental data from human subjects, providing a theoretical explanation for the observed phenomena. Overall, the paper contributes to the development of a theoretical understanding of how the brain processes visual information.
Theory.   Explanation: The paper proposes a numerical technique, called the singular limit method, which is derived from analysis of relaxation oscillations in the singular limit. The method is evaluated by computer experiments and produces remarkable speedup compared to other methods of integrating these systems. The paper does not involve any application of AI sub-categories such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning.
Neural Networks.   Explanation: The paper describes a model that integrates two separate lines of research on computational modeling of the visual cortex, both of which use neural networks to simulate the behavior of neurons in the brain. The model combines a laterally connected self-organizing map with spiking neurons with leaky integrator synapses to achieve both self-organization and segmentation in a unified network. Therefore, the paper belongs to the sub-category of AI known as Neural Networks.
Probabilistic Methods.   Explanation: The paper discusses the use of Bayesian methods and Markov Chain Monte Carlo (MCMC) methods for neural network prediction problems. It also introduces the use of Gaussian processes to approximate weight space integrals analytically, which is a probabilistic method.
Probabilistic Methods.   Explanation: The paper discusses algorithms for exact simulation of the stationary distribution of certain finite and infinite state space Markov chains, which are probabilistic models. The Coupling from the Past (CFTP) algorithm and Fill's rejection sampling algorithm are both probabilistic methods used for perfect sampling of these models. The paper also mentions the use of Gibbs sampling, which is a probabilistic method commonly used in Bayesian inference.
Neural Networks. The paper describes simulations of a neural network model of the primary visual cortex and how it self-organizes to develop receptive fields of different sizes and lateral connections. The model is based on Hebbian learning, which is a type of neural network learning rule.
This paper belongs to the sub-category of AI known as Reinforcement Learning. This is evident from the text where it is mentioned that "A Reinforcement Learning method is selected, which is able to adapt a controller such that a cost function is optimised. An estimate of the cost function is learned by a neural 'critic'." The paper describes the use of a neural network to learn an optimal controller for the attitude control of a satellite, which is a classic example of Reinforcement Learning.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper proposes the use of a genetic algorithm to evolve neural networks. The algorithm is described in detail, including the selection, crossover, and mutation operators. The fitness function used to evaluate the performance of the networks is also discussed.  Neural Networks: The paper focuses on the use of neural networks as the basis for the evolving networks. The authors describe the architecture of the networks used, including the number of layers and nodes, as well as the activation function used. The paper also discusses the training of the networks using backpropagation and the use of the genetic algorithm to evolve the weights and structure of the networks.
Rule Learning, Theory.   The paper discusses the problem of learning logic programs (Prolog clauses) from examples and background knowledge, which falls under the category of Rule Learning. The paper also presents theoretical results on the learnability of certain types of predicates, which falls under the category of Theory.
Probabilistic Methods.   Explanation: The paper discusses the Expectation-Maximization algorithm, which is a probabilistic method commonly used for solving Maximum A Posteriori (MAP) estimation problems. The examples given in the paper also involve probabilistic modeling and inference.
Reinforcement Learning, Probabilistic Methods  The paper belongs to the sub-categories of Reinforcement Learning and Probabilistic Methods.   Reinforcement Learning is present in the paper as the authors propose an adaptive robot control system that uses reinforcement learning to incrementally improve the robot's performance over time. The system uses a reward function to guide the robot's actions and learn from its experiences.  Probabilistic Methods are also present in the paper as the authors use Bayesian inference to model the uncertainty in the robot's environment and update the robot's beliefs about the world based on new sensor data. The authors also use probabilistic models to predict the outcomes of the robot's actions and choose the best course of action based on these predictions.
Neural Networks, Theory.  Neural Networks: The paper discusses the use of Radial Basis Functions (RBF) networks, which are a type of neural network.  Theory: The paper sets the problem in the framework of regularization theory and derives an analytical solution. It also discusses the concept of incorporating prior knowledge in supervised learning techniques.
Neural Networks, Probabilistic Methods, Theory.   Neural Networks: The paper discusses gain-adaptation algorithms based on connectionist learning methods, which are a type of neural network.   Probabilistic Methods: The paper compares the new algorithms to the Kalman filter, which is a probabilistic method for estimating the state of a system based on noisy measurements.   Theory: The paper presents computational results and evaluates the new algorithms with respect to classical methods along three dimensions: asymptotic error, computational complexity, and required prior knowledge about the system. The paper also discusses the theoretical complexity of the new algorithms compared to classical methods.
Rule Learning, Theory.   Rule Learning is the most related sub-category as the paper discusses the use of separate-and-conquer rule learning algorithms for windowing and presents a new windowing algorithm that exploits this property.   Theory is also relevant as the paper discusses the limitations of windowing and proposes a new algorithm to address these limitations. The paper also briefly discusses the problem of noisy data in windowing and presents some preliminary ideas for an extension of the algorithm.
Theory, Rule Learning.   The paper describes a comprehensive approach to automatic theory revision, which falls under the category of Theory in AI. The approach combines explanation attempts for incorrectly classified examples to identify the failing portions of the theory, and uses correlated subsets of examples to inductively generate a correction for each theory fault. This process involves refining and improving the propositional Horn-clause theory, which falls under the category of Rule Learning in AI.
Probabilistic Methods.   Explanation: The paper discusses the use of Markov chain Monte Carlo (MCMC) methods, which are a type of probabilistic method, for sampling from a given density. The paper also introduces and compares different auxiliary variable methods for MCMC, which are all probabilistic in nature. The applications discussed in the paper, such as binary classification and PET reconstruction, also involve probabilistic modeling and inference.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the convergence properties of perturbed Markov chains, which is a probabilistic method used in various fields such as machine learning, statistics, and artificial intelligence.   Theory: The paper presents theoretical results on the convergence properties of perturbed Markov chains, which is a fundamental concept in probability theory and stochastic processes. The authors discuss the conditions under which the perturbed Markov chains converge to a stationary distribution and provide proofs for their results.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper applies simulated annealing, a probabilistic metaheuristic inspired by the annealing process in metallurgy, to genetic programming problems. Although simulated annealing is not itself a genetic algorithm, the work is situated squarely in the genetic programming literature and is evaluated against standard genetic programming.  Theory: The paper investigates the generality of Automatically Defined Functions (ADFs) in solving genetic programming problems. It analyzes the performance of simulated annealing with and without ADFs on a suite of even-k-parity problems, and compares simulated annealing with ADFs to standard genetic programming with ADFs on the even-3-parity, even-4-parity, and even-5-parity problems. The analysis provides insights into the effectiveness of ADFs and the limitations of simulated annealing as an optimization procedure for these problems.
Rule Learning, Theory.   The paper focuses on the learning of decision trees and DNF formulae, which are examples of rule learning. The paper also presents a formalized model of "superfluous-value blocking" which is a theoretical approach to dealing with incomplete data.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper presents an approach to automatic discovery of functions in Genetic Programming. The approach involves analyzing the evolution trace, generalizing blocks to define new functions, and adapting the problem representation on-the-fly. These are all techniques commonly used in Genetic Algorithms.  Theory: The paper discusses the minimum description length principle and its application to justify the feasibility of approaches based on a hierarchy of discovered functions. This is a theoretical concept that is used to support the proposed approach.
Theory.   Explanation: The paper presents a theoretical analysis of a nonconvex model for pattern recognition and proposes a polynomial-time algorithm based on linear programming to solve a related model. The paper does not involve the implementation or application of any specific AI technique such as neural networks or genetic algorithms.
Neural Networks.   Explanation: The paper introduces a novel connectionist unit, which is a type of neural network, based on a mathematical model of entrainment. The network of these units can self-organize temporally structured responses to rhythmic patterns, embodying the perception of metrical structure. The paper discusses the implications of this approach for theories of metrical structure and musical expectancy, which are related to the learning and processing capabilities of neural networks.
Theory.   Explanation: The paper discusses the concept of the Observer's Paradox, which refers to the apparent computational complexity of physical systems when observed by an external agent. The paper presents a theoretical analysis of this paradox and its implications for understanding the nature of computation in physical systems. While the paper does touch on some aspects of probabilistic methods and neural networks, these are not the main focus of the paper and are not discussed in detail. Therefore, the paper primarily belongs to the sub-category of Theory.
Genetic Algorithms.   Explanation: The paper describes the development of a workbench specifically for genetic algorithm research, with a focus on order-based problems. It discusses the use of genetic operators for reproduction, crossover, and mutation, and the comparison of generational and steady-state genetic algorithms. The paper does not mention any other sub-categories of AI such as Case Based, Neural Networks, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Neural Networks, Theory.   Neural Networks: The paper presents a neural network model of the hippocampal episodic memory inspired by Damasio's idea of Convergence Zones. The model consists of a layer of perceptual feature maps and a binding layer.   Theory: The paper analyzes and simulates the convergence-zone episodic memory model and derives a theoretical lower bound for the memory capacity. The paper also shows why the memory encoding areas can be much smaller than the perceptual maps, consist of rather coarse computational units, and be only sparsely connected to the perceptual maps.
Theory.   Explanation: The paper discusses the theory space search component of the POLLYANNA system and the value of empirical learning in separating optimal theories from non-optimal ones. It does not discuss any of the other sub-categories of AI listed.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as the authors propose a multiagent Q-learning method for finding optimal strategies in general-sum stochastic games.   Theory is also a relevant sub-category, as the paper presents a theoretical framework for multiagent reinforcement learning in general-sum stochastic games, and proves the convergence of the proposed algorithm under certain conditions.
Case Based.   Explanation: The paper discusses techniques for accessing and exploiting past experience from corporate memory resources, which is a key aspect of case-based reasoning. The two approaches presented, Negotiated Retrieval and Federated Peer Learning, both involve using past cases to inform current decision-making.
Theory  Explanation: The paper discusses the use of knowledge about cognitive behavior to improve learning from failure in AI systems. It does not focus on any specific sub-category of AI, such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning. Instead, it presents a theoretical framework for incorporating self-knowledge into AI systems to improve their ability to diagnose and repair errors. Therefore, the paper belongs to the sub-category of AI known as Theory.
Theory.   Explanation: The paper discusses theory revision and theory-guided learning systems, which are focused on integrating inductive learning and background knowledge to produce more accurate theories. The paper does not discuss any of the other sub-categories of AI listed.
Neural Networks.   Explanation: The paper specifically addresses the issue of replicability in experiments based on backpropagation training of multilayer perceptrons, which is a subfield of neural computing. The paper discusses the parameters needed to support maximum replicability, proposes a statistical framework to support replicability, and demonstrates the framework with empirical studies. The paper does not address any other subfield of AI.
This paper belongs to the sub-category of AI known as Reinforcement Learning.   Explanation: The paper proposes an unsupervised neural network for a robot to learn sensori-motor associations with a delayed reward. The robot's task is to learn the "meaning" of pictograms in order to "survive" in a maze. The paper discusses the difficulty of building visual categories dynamically while associating them with movements. The authors propose to use their algorithm on a simulation to test it exhaustively and compare it to an adapted version of the Q-learning algorithm. The paper concludes by showing the limitations of approaches that do not take into account the intrinsic complexity of reasoning based on image recognition. All of these aspects are related to Reinforcement Learning.
Probabilistic Methods.   Explanation: The paper explicitly mentions the use of "data-driven probabilistic inference modeling" for the analysis and synthesis of acoustical instruments. The general inference framework used is Cluster-Weighted Modeling, which is a probabilistic method for modeling complex data distributions.
Probabilistic Methods.   Explanation: The paper discusses the use of probabilistic models for cluster analysis and inference. It specifically mentions the use of mixture models and Bayesian methods for model selection and parameter estimation. The focus on probabilistic modeling and statistical inference makes this paper most closely related to the sub-category of AI known as Probabilistic Methods.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper presents a new algorithm called Structural Regression Trees (SRT) which constructs a tree containing a literal or a conjunction of literals in each node, and assigns a numerical value to each leaf.   Probabilistic Methods are present in the text as the paper integrates the statistical method of regression trees into ILP to predict numerical values from examples and relational and mostly non-determinate background knowledge.
Probabilistic Methods.   Explanation: The paper describes a Bayesian framework for learning mappings in feedforward networks, which involves the use of probabilistic methods to make objective comparisons between solutions, choose regularization terms, measure the effective number of parameters, estimate error bars, and compare with alternative learning models. The Bayesian approach also helps detect poor underlying assumptions in learning models and penalizes over-flexible and over-complex models, embodying Occam's razor. The paper explicitly refers to the Bayesian framework for regularization and model comparison described in a companion paper by MacKay (1991a) and due to Gull and Skilling (Gull, 1989a).
Theory  Explanation: This paper does not involve any AI techniques or algorithms from the other sub-categories listed. It presents an architecture for simultaneous multithreading and evaluates its performance. Lacking a better fit, it is assigned to Theory, the sub-category that encompasses research on the fundamental principles and concepts of computer science and engineering.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper describes an approach called PTR that uses probabilities associated with domain theory elements to track the flow of proof through the theory. This allows for the precise measurement of the role of a clause or literal in allowing or preventing a derivation for a given example.   Theory: The paper is focused on the theory revision problem for propositional domain theories and presents an approach to efficiently locate and repair flawed elements of the theory. The approach is proved to converge to a theory which correctly classifies all examples and is shown experimentally to be fast and accurate even for deep theories.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper evaluates Gaussian processes, which are a type of probabilistic method used for regression. The authors compare the performance of Gaussian processes with other probabilistic methods such as Bayesian linear regression and support vector regression.  Theory: The paper discusses the theoretical background of Gaussian processes and other regression methods. The authors explain the mathematical concepts behind these methods and how they can be applied to non-linear regression problems. They also provide a detailed analysis of the strengths and weaknesses of each method based on their theoretical properties.
Probabilistic Methods.   Explanation: The paper discusses the use of Bayesian methods for mixture analysis, specifically using reversible jump Markov chain Monte Carlo methods to generate a sample from the full joint distribution of all unknown variables. The focus is on probabilistic modeling and inference, which falls under the category of probabilistic methods in AI.
Reinforcement Learning.   Explanation: The paper discusses incremental variants of policy iteration, a core reinforcement learning algorithm that computes optimal policies by alternating policy evaluation and policy improvement. The acknowledgments also note the support of the U.S. Air Force, though the classification rests on the algorithmic content.
Genetic Algorithms.   Explanation: The paper discusses the implementation of specific application routines using genetic algorithms. It also provides examples of two application files that use genetic algorithms. Therefore, the paper belongs to the sub-category of AI known as Genetic Algorithms.
Neural Networks, Theory.   Neural Networks: The paper describes an implementation of active concept learning using a neural network, called an SG-network. The network is trained to selectively sample parts of the input domain based on distribution information received from the environment.  Theory: The paper discusses the theoretical concept of active learning and its advantages over passive learning. It also introduces a formalism for active concept learning called selective sampling. The authors test their implementation on three domains and observe significant improvement in generalization.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper analyzes the effect of eliminating branch operations on branch prediction schemes in existing processors, which involves probabilistic methods for predicting the outcome of a branch.  Reinforcement Learning: The paper studies the effect of predicated execution on branch prediction accuracy, branch penalty, and basic block size, which involves learning from the feedback of the system's performance, a key aspect of reinforcement learning.
Case Based, Rule Learning.   The paper discusses the use of rules and precedents in the classification task, which is a key aspect of case-based reasoning. It also describes how rules can assist in case-based reasoning through case elaboration and term reformulation, which are both related to rule learning.
Reinforcement Learning, Probabilistic Methods.   Reinforcement Learning is the main sub-category of AI that this paper belongs to, as it introduces a model-based average reward Reinforcement Learning method called H-learning and compares it with its discounted counterpart, Adaptive Real-Time Dynamic Programming, in a simulated robot scheduling task. The paper also introduces an extension to H-learning, which automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function.   Probabilistic Methods are also present in the paper, as the exploration methods studied (random, recency-based, or counter-based exploration) are all probabilistic in nature.
Genetic Algorithms, Fuzzy Logic Techniques.   Explanation: The paper proposes using fuzzy logic techniques to dynamically control parameter settings of genetic algorithms (GAs). It describes the Dynamic Parametric GA, a GA that uses a fuzzy knowledge-based system to control GA parameters. The paper also introduces a technique for automatically designing and tuning the fuzzy knowledge-base system using GAs. Therefore, the paper is primarily focused on Genetic Algorithms and Fuzzy Logic Techniques.
Probabilistic Methods, Neural Networks, Theory.   Probabilistic Methods: The paper describes how the algorithm for learning linear sparse codes can be interpreted within a maximum-likelihood framework, which is a probabilistic approach to modeling data.  Neural Networks: The algorithm for learning linear sparse codes is a type of neural network, specifically a feedforward network with a single hidden layer.  Theory: The paper discusses the theoretical underpinnings of the algorithm and its relationship to statistical independence and other related algorithms. It also suggests how to adapt parameters that were previously fixed, which is a theoretical consideration.
Probabilistic Methods.   Explanation: The paper discusses the computation of rigorous bounds on the marginal probabilities of evidence in layered belief networks of binary random variables, which are probabilistic models. The methods presented in the paper use large deviation theory to compute these bounds, which is a probabilistic method. The paper also discusses generic transfer function parameterizations of the conditional probability tables, such as sigmoid and noisy-OR, which are commonly used in probabilistic models.
Theory.   Explanation: The paper focuses on characterizing the learnability of classes of {0, ..., n}-valued functions, which is a theoretical problem in machine learning. The paper does not discuss any specific AI techniques or algorithms, but rather presents theoretical results and proofs related to the learnability of these functions. Therefore, the paper belongs to the sub-category of AI theory.
Rule Learning, Theory.   Explanation:  The paper belongs to the sub-category of Rule Learning because it discusses the implementation of a sequential feature selection algorithm based on an existing conceptual clustering system. This involves the creation of rules for selecting features that are most relevant to the clustering task.   The paper also belongs to the sub-category of Theory because it discusses the issues raised in feature selection by the absence of class labels and presents a comparison of two different algorithms for feature selection in conceptual clustering. This involves a theoretical analysis of the benefits and drawbacks of each approach.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as it discusses the use of dynamic programming and function approximation in RL approaches. The paper also falls under the category of Theory, as it presents a theoretical result on the performance loss from approximate optimal-value functions.
Neural Networks, Probabilistic Methods, Rule Learning.   Neural Networks: The paper discusses connectionist networks as a possible learning paradigm for vision systems.   Probabilistic Methods: The paper mentions statistical pattern recognition systems as a possible learning paradigm for vision systems.   Rule Learning: The paper briefly analyzes symbol processing systems as a possible learning paradigm for vision systems.
Neural Networks.   Explanation: The paper describes a self-organizing neural network called SARDNET for sequence classification. The network extends the Kohonen Feature Map architecture with activation retention and decay to create unique distributed response patterns for different sequences. The network has been successful in mapping arbitrary sequences of binary and real numbers, as well as phonemic representations of English words. Therefore, the paper belongs to the sub-category of AI called Neural Networks.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the integration of knowledge from multiple sources to construct an integrated knowledge base that can exploit all available knowledge and has good performance. This involves probabilistic methods such as Bayesian networks and probabilistic graphical models to combine the knowledge from different sources.  Theory: The paper presents a methodology for knowledge integration and discusses the results of experiments that show the performance of the integrated theory exceeded the performance of the individual theories. The paper also discusses how knowledge integration can complement other existing ML methods. This involves theoretical concepts such as knowledge representation, reasoning, and learning.
Theory  Explanation: The paper focuses on the theoretical framework of bias selection as search in bias and meta-bias spaces. It does not discuss any specific AI sub-category such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Rule Learning.   Explanation: The paper describes a method for learning decision trees from decision rules generated by an AQ-type learning system. This falls under the sub-category of AI known as Rule Learning, which involves learning rules from data and using them to make decisions or predictions. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Theory.
Neural Networks, Theory.   Neural Networks: The paper discusses Projective Basis Function Networks, which are a type of neural network. It describes the architecture and training of these networks, as well as their applications in pattern recognition and function approximation.  Theory: The paper presents a mathematical analysis of the Projective Basis Function Networks, including their global and local forms, and derives formulas for their weights and biases. It also discusses the relationship between these networks and other types of neural networks, such as radial basis function networks.
Neural Networks, Theory.   Neural Networks: The paper describes a neural network that was trained to produce reduced memory representations for melodies and shows that it represented structurally important events more efficiently than others.   Theory: The paper discusses the implications of reductionist theories for mental representations of music and how judgments of structural importance may result from the production of reduced memory representations. It also presents music-theoretic predictions and shows how they align with the results of the study.
Probabilistic Methods.   Explanation: The paper discusses the use of variational free energy minimization, which is a probabilistic method, to optimize an ensemble of parameter vectors in neural networks. The paper also specifically focuses on the optimization of regularization constants, which is a probabilistic method commonly used in linear regression models.
Probabilistic Methods.   Explanation: The paper discusses the use of a Markov chain Monte Carlo (MCMC) algorithm, which is a probabilistic method for sampling from a given distribution. The self regenerative MCMC algorithm and its adaptation scheme are both probabilistic methods. The paper also compares the performance of the proposed methodology with other available MCMC techniques, further emphasizing its probabilistic nature.
Case Based, Analogical Reasoning.   The paper discusses the Conceptual Analogy approach, which integrates memory organization based on prior experiences and analogical reasoning to support the design process in building engineering. This approach automatically extracts knowledge from prior layouts to support design tasks, determines the similarity of complex case representations in terms of adaptability, and allows for incremental knowledge acquisition and user support. Therefore, it belongs to the sub-category of Case Based AI. Additionally, the paper heavily focuses on the use of analogical reasoning in the Conceptual Analogy approach, making it relevant to the sub-category of Analogical Reasoning AI.
Probabilistic Methods.   Explanation: The paper discusses various dynamic confidence-prediction schemes that gauge the likelihood of branch mispredictions, which is a key aspect of probabilistic methods in AI. The authors use these schemes to determine which paths to execute simultaneously and to improve performance in the face of imperfect branch predictors.
Probabilistic Methods. This paper belongs to the sub-category of Probabilistic Methods in AI. The paper presents algorithms for robustness analysis of Bayesian networks, which are probabilistic graphical models used for probabilistic reasoning and decision-making. The algorithms presented in the paper are based on probabilistic methods such as expected utility, expected value, and variance bounds.
Reinforcement Learning, Neural Networks  The paper belongs to the sub-category of Reinforcement Learning as it describes a control system for a mobile robot that uses a reinforcement learning scheme to find a correct mapping from input (sensor) space to output (steering signal) space. The only feedback to the control system is a binary-valued external reinforcement signal, which indicates whether or not a collision has occurred.  The paper also belongs to the sub-category of Neural Networks as an adaptive quantisation scheme is introduced, through which the discrete division of input space is built up from scratch by the system itself. This involves adding and removing neurons to the neural network as needed to improve the system's performance.
Neural Networks, Rule Learning.   Neural Networks: The paper discusses the extraction of rules from trained feedforward networks, which are a type of neural network.   Rule Learning: The paper proposes a mechanism for evaluating and ordering the rules extracted from the feedforward networks. It also discusses the integration of the extracted rule-based system with the trained network.
Genetic Algorithms, Neural Networks, Reinforcement Learning  The paper discusses the use of coevolutionary algorithms to evolve high-level representations in artificial intelligence systems. Specifically, the authors use genetic algorithms to evolve neural network architectures and reinforcement learning to optimize the performance of these networks. The coevolutionary approach allows for the simultaneous evolution of both the network architecture and the task-specific weights, resulting in more efficient and effective learning. Rule learning and probabilistic methods are not explicitly mentioned in the paper.
Neural Networks, Genetic Algorithms.   Neural Networks: The paper discusses the construction of recurrent neural networks.   Genetic Algorithms: The paper argues that genetic algorithms are inappropriate for network acquisition and proposes an evolutionary program called GNARL that simultaneously acquires both the structure and weights for recurrent networks. The paper also discusses the potential benefits of using an empirical acquisition method that allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper uses a probabilistic approach to model the association between hormones. It uses a bivariate normal distribution to model the joint distribution of the hormones and estimates the parameters using maximum likelihood estimation. The paper also discusses the use of Bayesian methods for model selection and inference.  Theory: The paper presents a theoretical framework for spline smoothing of bivariate data. It discusses the use of penalized likelihood to estimate the smoothing parameters and the choice of the penalty function. The paper also provides theoretical results on the convergence of the spline estimator and the asymptotic properties of the estimator.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper proposes a new mechanism for genetic encoding of neural networks, which allows all aspects of the network structure to be evolved through genetic algorithms.   Neural Networks: The paper focuses on using genetic encoding to evolve neural networks for an object recognition task that requires artificial creatures to develop high-level finite-state exploration and discrimination strategies. The task requires solving the sensory-motor grounding problem, which is a common challenge in neural network research.
Probabilistic Methods.   Explanation: The paper discusses the derivation of Bayesian "confidence intervals" for the components of a multivariate smoothing spline estimate, which is a probabilistic method. The authors also mention the use of multiple smoothing parameters, which is a common feature of Bayesian methods.
Probabilistic Methods, Theory.   The paper belongs to the sub-category of Probabilistic Methods because it discusses soft classification, which is a probabilistic approach to classification. The paper uses penalized log likelihood to estimate the risk of classification and smoothing spline analysis of variance to model the relationship between the predictor variables and the response variable.   The paper also belongs to the sub-category of Theory because it presents a theoretical framework for soft classification and risk estimation. The paper discusses the mathematical basis for penalized log likelihood and smoothing spline analysis of variance and provides proofs for the properties of these methods.
Probabilistic Methods.   The paper presents a new representation for the fluorescent trace data associated with individual base calls, which can be used to improve the quality of assemblies. This representation is based on probabilistic methods, as it involves classifying the trace data into different categories based on their quality and using this information to make decisions about how to improve the assembly process. For example, the authors demonstrate how end-trimming of suboptimal data can result in a significant improvement in the quality of subsequent assemblies.
Theory  Explanation: The paper presents a theoretical investigation into modifying MIMD architectures to extract instruction level parallelism. It does not utilize any specific AI techniques such as neural networks or genetic algorithms.
Theory  Explanation: The paper describes a new architecture for a computer processor, which is a theoretical concept rather than a practical implementation of AI. The paper does not discuss any specific AI algorithms or techniques.
Probabilistic Methods, Reinforcement Learning, Theory.   Probabilistic Methods: The paper discusses two variants of the bridge problem, where transitions can break or be fixed with some probability at each time step. The paper also discusses a priori probabilities of transitions being intact in the deterministic model.   Reinforcement Learning: The paper shows how an agent can act optimally in the bridge problem by reduction to Markov decision processes. The paper also suggests neuro-dynamic programming as a method of value function approximation for these types of models.   Theory: The paper presents a theoretical examination of the bridge problem and its variants, discussing methods of solving them and noting their intractability for reasonably sized problems.
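The reduction to a Markov decision process mentioned above can be illustrated with a short value-iteration sketch. The two-state problem, transition probabilities, and rewards below are illustrative stand-ins, not the paper's actual bridge model.

```python
# Value iteration on a toy two-state MDP (all numbers illustrative):
# from "start", the "cross" action reaches "goal" with probability 0.8
# (the crossing may be "broken" and leave the agent in place), while
# "wait" is safe but incurs a small recurring cost.

def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """P[s][a] = list of (prob, next_state); R[s][a] = immediate reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

states = ["start", "goal"]
actions = ["cross", "wait"]
P = {
    "start": {"cross": [(0.8, "goal"), (0.2, "start")],
              "wait":  [(1.0, "start")]},
    "goal":  {"cross": [(1.0, "goal")],
              "wait":  [(1.0, "goal")]},
}
R = {
    "start": {"cross": -0.2, "wait": -0.1},  # expected cost of attempting vs. waiting
    "goal":  {"cross": 0.0, "wait": 0.0},
}
V = value_iteration(states, actions, P, R)
```

Under these numbers the optimal policy is to attempt the crossing, giving V("start") = -0.2 / 0.82; raising the crossing cost far enough would flip the policy to waiting.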
Neural Networks.   Explanation: The paper discusses the use of a two-layer neural network with sigmoid activation functions to classify EEG signals. The implementation of the neural network on a CNAPS server is also mentioned. There is no mention of any other sub-category of AI in the text.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses learning monomials in the presence of malicious noise, which involves probabilistic modeling and analysis of the underlying distribution. The authors also mention that their results apply to a wide class of distributions, indicating a probabilistic approach to the problem.  Theory: The paper presents theoretical results on learning conjunctions with malicious noise, including a formal definition of the problem, a proof of the main theorem, and a discussion of the implications of the results. The authors also provide a detailed analysis of the assumptions and limitations of their approach, which is a key aspect of theoretical research.
Neural Networks, Theory.   Neural Networks: The paper discusses recurrent perceptron classifiers, which are a type of neural network. The paper provides bounds on sample complexity associated with fitting these models to experimental data.   Theory: The paper provides theoretical results on the sample complexity of learning recurrent perceptron mappings. It provides tight bounds on the number of samples required to fit these models to experimental data.
Neural Networks.   Explanation: The paper discusses the use of neural networks for processing time-varying patterns, and presents a taxonomy of neural net architectures for this purpose. The paper also describes experiments using neural networks for predicting future values of a financial time series. There is no mention of any other sub-category of AI in the text.
Neural Networks.   Explanation: The paper describes DISLEX, an artificial neural network model of the mental lexicon. The model is based on unsupervised learning and simulates various impairments similar to those observed in human patients. The paper does not mention any other sub-categories of AI.
Theory, Neural Networks  Explanation:  This paper belongs to the sub-category of Theory as it presents a theoretical framework for understanding synaptic plasticity in the visual cortex. The paper also involves the use of Neural Networks as it discusses the role of synaptic plasticity in shaping the connectivity and function of neural networks in the visual cortex.
This paper belongs to the sub-category of AI called Neural Networks.   Explanation: The paper discusses the use of subsymbolic neural networks for natural language processing. It explains how these networks can be trained to recognize patterns in language data and make predictions based on those patterns. The paper also discusses the advantages and limitations of using neural networks for natural language processing. Therefore, the paper is primarily focused on the use of neural networks in AI.
Neural Networks, Theory.   Neural Networks: The paper discusses the use of neural networks in modeling the neural mechanisms underlying rodent navigation. Specifically, the authors propose a neural network model that incorporates both place cells and grid cells, which are known to play a key role in spatial navigation in rodents. The model is used to simulate various navigation tasks and to investigate the neural mechanisms underlying these tasks.  Theory: The paper also contributes to the development of a computational neuroscience theory of rodent navigation. The authors propose a theoretical framework that integrates various aspects of rodent navigation, including the role of place cells, grid cells, and other neural mechanisms. The paper discusses how this framework can be used to explain various experimental findings and to make predictions about the neural mechanisms underlying navigation in rodents.
Neural Networks.   Explanation: The paper specifically discusses the suitability of "neural nets" as models and controllers for dynamical systems. The entire paper is focused on discussing the use of neural networks in this context and does not mention any other sub-category of AI.
Neural Networks, Reinforcement Learning  Explanation:   This paper belongs to the sub-category of Neural Networks as it describes a learning system that combines an unsupervised learning scheme (Feature Maps) with a nonlinear approximator (Backpropagation) to learn high-dimensional mappings.   It also belongs to the sub-category of Reinforcement Learning as it mentions extensions of the method that give rise to active exploration strategies for autonomous agents facing unknown environments.
Probabilistic Methods.   Explanation: The paper formulates the search for a feature subset as an abstract search problem with probabilistic estimates. The evaluation function used in the search is a random variable, which requires trading off accuracy of estimates for increased state exploration. The paper also discusses how recent feature subset selection algorithms in the machine learning literature fit into this search problem as simple hill climbing approaches.
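The view of feature subset selection as hill climbing over an abstract search space can be sketched as follows. The toy scoring function (a stand-in for the noisy, cross-validation-style evaluator the paper treats as a random variable) and the designated "useful" features are illustrative assumptions.

```python
import random

# First-improvement hill climbing over feature subsets. The evaluator
# is a stand-in for a cross-validated accuracy estimate: it has an
# explicit noise term, echoing the point that the evaluation function
# is a random variable. The "useful" feature set {0, 2} is made up.

def noisy_eval(subset, useful=frozenset({0, 2}), noise=0.0, rng=random):
    base = len(useful & subset) / len(useful)   # reward useful features
    penalty = 0.01 * len(subset - useful)       # small cost for extras
    return base - penalty + rng.uniform(-noise, noise)

def forward_select(n_features, evaluate):
    """Greedily add any feature whose inclusion improves the score."""
    current, best_score = set(), evaluate(set())
    improved = True
    while improved:
        improved = False
        for f in range(n_features):
            if f in current:
                continue
            score = evaluate(current | {f})
            if score > best_score:
                current, best_score = current | {f}, score
                improved = True
    return current

# With the noise switched off the search recovers exactly the useful set.
chosen = forward_select(5, lambda s: noisy_eval(s, noise=0.0))
```

Raising `noise` above zero makes accepted moves unreliable, which is precisely the accuracy-versus-exploration trade-off the entry describes.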
Genetic Algorithms.   Explanation: The paper discusses the implementation of parallel Genetic Programming (GP) on a MasPar MP-2 computer. GP is a subfield of Genetic Algorithms, which is a type of evolutionary algorithm used for optimization and search problems. The paper specifically focuses on parallelizing the evaluation of S-expressions in GP using a SIMD architecture.
Reinforcement Learning, Theory.   Reinforcement learning is the main topic of the paper, as it discusses algorithms for generating optimal behavior in a sequential decision-making environment. The paper presents a new theorem that can provide a unified analysis of value-function-based reinforcement-learning algorithms, which falls under the category of theory.
Theory  Explanation: The paper discusses the use of path diagrams as a tool for structural equation modeling, which is a statistical method used to test theoretical models. The paper does not discuss any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning. Therefore, the paper belongs to the sub-category of AI called Theory.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the use of Independent Component Analysis (ICA), which is a probabilistic method for blind source separation.   Theory: The paper introduces contextual ICA in the context of hyperspectral data analysis and applies the method to mineral data from synthetically mixed minerals and real image signatures. The paper also discusses the problem of spectrally unmixing materials and how it can be viewed as a specific case of the blind source separation problem.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses partially observable Markov decision processes (POMDPs), which are a probabilistic method for modeling decision or control problems with uncertainty and imperfect observability. The methods proposed for computing bounds on the value function also involve working with the belief space, which is a probabilistic representation of the agent's knowledge about the environment.  Reinforcement Learning: The control problem in POMDPs is formulated as a dynamic optimization problem with a value function that combines costs or rewards from multiple steps. The paper proposes and tests various incremental methods for computing bounds on this value function, which is a key component of reinforcement learning algorithms. The methods include novel versions of grid-based linear interpolation and lower bound methods, as well as a new method for computing an initial upper bound. The quality of the resulting bounds is tested on a maze navigation problem, which is a classic reinforcement learning task.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses the use of Bayesian methods for automatic relevance determination in non-linear regression modeling.   Neural Networks: The paper specifically mentions the use of neural networks as a popular technique for modeling tasks such as predicting building energy loads from environmental input variables. The paper also discusses the limitations of conventional neural networks in handling irrelevant input variables and proposes the use of the Automatic Relevance Determination (ARD) model, which puts a prior over the regression parameters and introduces multiple regularisation constants, one associated with each input.
This paper belongs to the sub-categories of AI: Case Based and Neural Networks.   Case Based: The paper describes the use of a case-based reasoning approach to improve a prototype-based neural network model. The authors propose storing specific instances in a CBR memory system to enhance the classification performance of the neural network.   Neural Networks: The paper primarily focuses on the use of prototype-based incremental neural networks for classification tasks. The authors also propose a co-processing hybrid model that combines the neural network and CBR approaches.
Theory  Explanation: The paper presents a theoretical investigation into modifying MIMD architectures to extract instruction level parallelism, and proposes a new architecture and code scheduling mechanism to achieve this. There is no mention of any specific AI techniques such as neural networks, genetic algorithms, or reinforcement learning.
Neural Networks. This paper belongs to the sub-category of Neural Networks in AI. The paper presents a performance prediction method for MIMD parallel processor systems for neural network simulations. The method is validated by applying it to two popular neural networks, backpropagation and the Kohonen self-organizing feature map. The paper focuses on the performance of parallel neural network simulations, which is a key aspect of neural network research and development.
Probabilistic Methods.   Explanation: The paper outlines how a tree learning algorithm can be derived using Bayesian statistics, which is a probabilistic method. The paper introduces Bayesian techniques for splitting, smoothing, and tree averaging. The splitting rule is similar to Quinlan's information gain, while smoothing and averaging replace pruning. The comparative experiments show that the full Bayesian algorithm can produce more accurate predictions than other approaches.
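The information-gain quantity that the Bayesian splitting rule is said to resemble can be computed directly. This is a generic sketch with made-up labels, not the paper's Bayesian criterion.

```python
from math import log2
from collections import Counter

# Information gain of a candidate binary split: parent entropy minus
# the size-weighted entropies of the two children.

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, left, right):
    n = len(labels)
    return entropy(labels) - (len(left) / n) * entropy(left) \
                           - (len(right) / n) * entropy(right)

labels = ["+", "+", "+", "-", "-", "-"]
# A split that separates the classes perfectly versus one that doesn't.
gain_perfect = information_gain(labels, ["+", "+", "+"], ["-", "-", "-"])
gain_useless = information_gain(labels, ["+", "-", "+"], ["-", "+", "-"])
```

A splitting rule of this family simply picks the candidate test with the largest gain; the Bayesian variant described in the entry additionally replaces pruning with smoothing and tree averaging.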
Genetic Algorithms, Reinforcement Learning, Theory.   Genetic Algorithms: The paper discusses the use of evolutionary algorithms, which are a type of genetic algorithm, in the field of robotics. It describes how robots can be evolved using genetic algorithms to improve their performance and adaptability.  Reinforcement Learning: The paper also discusses the use of reinforcement learning in robotics, which involves training robots to learn from their environment and improve their behavior over time. It describes how reinforcement learning can be used to teach robots to perform complex tasks and adapt to changing conditions.  Theory: The paper also discusses theoretical issues related to evolutionary robotics, such as the need for a better understanding of the relationship between evolution and learning, and the challenges of designing robots that can adapt to a wide range of environments and tasks. It also discusses the potential applications of evolutionary robotics in fields such as space exploration and environmental monitoring.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses the use of POMDPs, which are probabilistic models of sequential decision-making. The authors formulate the problems of learning to sort a vector of numbers and learning decision trees from data as POMDPs.  Reinforcement Learning: The paper uses a general POMDP algorithm to solve the formulated problems. The authors mention the advantage of the POMDP approach in producing principled solutions that integrate physical and information gathering actions.
Rule Learning, Theory.   Rule Learning is present in the text as the system is designed to examine its own reasoning, analyze its reasoning failures, and select appropriate learning strategies in order to learn the required knowledge without overreliance on the programmer.   Theory is present in the text as the paper proposes a general representation and processing framework for introspective reasoning for strategy selection. It also introduces a knowledge structure called a Meta-Explanation Pattern to explain how conclusions are derived and why such conclusions fail.
Probabilistic Methods.   Explanation: The paper describes the use of fuzzy methods to represent uncertainty in the student model, which is a common technique in probabilistic modeling. The ML-Modeler component also generates plausible hypotheses about the student's misconceptions and errors, which involves probabilistic reasoning. While other sub-categories of AI may also be relevant to the project, such as case-based reasoning and neural networks, the use of probabilistic methods is the most prominent and relevant to the text.
Probabilistic Methods, Rule Learning  Explanation:  - Probabilistic Methods: The paper discusses the task of setting parameters for machine learning algorithms, which is a common problem in probabilistic methods. The approach presented in the paper uses empirical evaluation to guide the search for optimal parameter values. - Rule Learning: The paper specifically mentions the use of an inductive concept learning system called Magnus, which is a rule learning algorithm. The approach presented in the paper uses local optimization to select the best model, which is a common technique in rule learning.
Rule Learning.   Explanation: The paper presents a method for learning logic programs, which falls under the sub-category of AI known as Rule Learning. The method does not use explicit negative examples and instead relies on output completeness to implicitly represent negative examples. The paper also discusses two ILP systems, Chillin and IFoil, which incorporate this method and use intensional background knowledge. These are all characteristics of Rule Learning in AI.
Case Based, Reinforcement Learning  Explanation:  This paper belongs to the sub-category of Case Based AI because it discusses instance-based learning methods, which are a type of case-based reasoning. The paper also belongs to the sub-category of Reinforcement Learning because it discusses the advantages of instance-based methods for autonomous systems, which often use reinforcement learning techniques.
Neural Networks, Fuzzy Logic.   Neural Networks: The paper introduces a new agglomerative clustering algorithm that uses fuzzy hyperboxes to represent pattern clusters, applying multi-resolution techniques to progressively combine these hyperboxes in a hierarchical manner.   Fuzzy Logic: The fuzzy aspect lies in the cluster representation itself, since pattern clusters are modelled as fuzzy hyperboxes.
Rule Learning, Theory.   Explanation:  This paper belongs to the sub-category of Rule Learning because it introduces a technique for partitioning examples using oblique hyperplanes, which is a method for creating decision trees. The paper discusses how this technique can produce smaller but equally accurate decision trees compared to other methods.   It also belongs to the sub-category of Theory because it presents a new approach to decision tree induction and discusses its potential benefits. The paper describes how the algorithm was tested on both real and simulated data and provides evidence of its effectiveness.
Genetic Algorithms, Rule Learning.   Genetic Algorithms: The paper introduces ICET, a new algorithm for cost-sensitive classification that uses a genetic algorithm to evolve a population of biases for a decision tree induction algorithm. The fitness function of the genetic algorithm is the average cost of classification when using the decision tree, including both the costs of tests (features, measurements) and the costs of classification errors.  Rule Learning: ICET is a hybrid genetic decision tree induction algorithm that evolves a population of biases for decision tree induction. The biases are used to guide the search for decision trees that minimize the cost of classification. The paper compares ICET with three other algorithms for cost-sensitive classification (EG2, CS-ID3, and IDX) and with C4.5, which classifies without regard to cost. The evaluation is based on five real-world medical datasets, and the performance of the algorithms is measured in terms of their ability to minimize the cost of classification.
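The ICET idea of a fitness function equal to the average cost of classification (test costs plus error costs) can be sketched with a deliberately simplified GA. Here the evolved individual is a single decision threshold rather than a bias vector for decision tree induction, and all costs and data below are illustrative.

```python
import random

# A minimal GA whose fitness is the (negative) average classification
# cost, combining a fixed cost for performing the test with a penalty
# for each classification error. This is a toy stand-in for ICET's
# evolution of decision-tree induction biases.

def classification_cost(threshold, data, test_cost=1.0, error_cost=10.0):
    cost = 0.0
    for x, label in data:
        cost += test_cost                 # always pay to measure the feature
        if int(x > threshold) != label:
            cost += error_cost            # pay for misclassification
    return cost / len(data)

def evolve(data, pop_size=20, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(0, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: classification_cost(t, data))
        survivors = pop[:pop_size // 2]          # keep the cheapest half
        pop = survivors + [t + rng.gauss(0, 0.5) for t in survivors]
    return min(pop, key=lambda t: classification_cost(t, data))

# Labels are 1 exactly when x > 5, so thresholds near 5 are cheapest.
data = [(x / 10, int(x / 10 > 5)) for x in range(101)]
best = evolve(data)
```

Because survivors are carried over unchanged, the best cost in the population never increases from generation to generation.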
Probabilistic Methods, Theory  Probabilistic Methods: The paper discusses the use of probabilistic models to understand musical sound. Specifically, it mentions the use of Bayesian models to infer the parameters of a forward model that can predict the sound produced by a musical instrument.  Theory: The paper also discusses the use of physical models to understand musical sound. These models are based on the underlying physics of the instrument and can be used to simulate the sound produced by the instrument. The paper discusses how these models can be used to understand the relationship between the physical properties of an instrument and the sound it produces.
Neural Networks.   Explanation: The paper focuses on the role of mathematical programming, particularly linear programming, in training neural networks. The entire paper is dedicated to discussing the use of linear programming and unconstrained minimization techniques for training neural networks. The paper also provides a brief description of a system for breast cancer diagnosis that has been in use for the last four years at a major medical facility, which is an application of neural networks. Therefore, the paper belongs to the sub-category of AI known as Neural Networks.
Case Based, Theory  Explanation:  - Case Based: The paper discusses standard case-based reasoning (CBR) systems and proposes to make them more creative. It also investigates the role of cases and CBR in creative problem solving.  - Theory: The paper aims to understand creative processes better and proposes a framework to support more interesting case-based reasoning. It also discusses methodological issues in the study of creativity and the use of CBR as a research paradigm for exploring creativity.
Probabilistic Methods.   Explanation: The paper describes the use of hidden Markov models, which are a type of probabilistic model, for segmenting DNA sequences and characterizing their compositional inhomogeneity. The paper also discusses the likelihood landscape and optimization process of the model, which are key aspects of probabilistic methods.
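A minimal forward-algorithm sketch shows how a two-state HMM of this kind assigns likelihoods to compositionally homogeneous versus mixed sequences. All parameter values below are invented for illustration, not taken from the paper.

```python
# Forward algorithm for a two-state HMM over DNA symbols: state 0 is a
# GC-rich regime, state 1 an AT-rich regime, with "sticky" transitions
# so that homogeneous stretches are favoured.

def forward_likelihood(seq, trans, emit, init):
    """P(seq) under the HMM; trans[i][j], emit[i][symbol], init[i]."""
    alpha = [init[i] * emit[i][seq[0]] for i in range(len(init))]
    for sym in seq[1:]:
        alpha = [
            emit[j][sym] * sum(alpha[i] * trans[i][j]
                               for i in range(len(alpha)))
            for j in range(len(alpha))
        ]
    return sum(alpha)

trans = [[0.9, 0.1], [0.1, 0.9]]
emit = [{"A": 0.1, "C": 0.4, "G": 0.4, "T": 0.1},   # GC-rich state
        {"A": 0.4, "C": 0.1, "G": 0.1, "T": 0.4}]   # AT-rich state
init = [0.5, 0.5]

p_gc = forward_likelihood("GCGCGC", trans, emit, init)
p_at = forward_likelihood("ATATAT", trans, emit, init)
p_mixed = forward_likelihood("GCATGC", trans, emit, init)
```

Segmentation methods of this kind exploit exactly this effect: homogeneous stretches score higher than mixed ones, so the most likely state path partitions the sequence into compositional segments.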
Neural Networks.   Explanation: The paper discusses weight modifications in traditional neural nets and proposes a new algorithm for a recurrent neural network to improve its own weight matrix. The paper does not mention any other sub-categories of AI such as case-based reasoning, genetic algorithms, probabilistic methods, reinforcement learning, rule learning, or theory.
Neural Networks.   Explanation: The paper discusses the implementation of many-to-many mappings within connectionist models, which are a type of neural network. The paper specifically extends sequential cascaded networks to fit the task of multiassociative memory. While other sub-categories of AI may be relevant to the topic of cognitive modeling, the focus of this paper is on neural networks.
Neural Networks.   Explanation: The paper discusses the development of a neural circuit for visual image stabilization under eye movements. The circuit is modeled using triadic connections that are gated by signals indicating the direction of gaze. The neural model is exposed to sequences of stimuli paired with appropriate eye position signals in simulations. Therefore, the paper primarily focuses on the development of a neural network for visual stabilization.
Rule Learning, Case Based.   Rule Learning is the most related sub-category of AI as the paper uses the C4.5 algorithm to generate decision trees and prediction rules from cases in the CONFMAN database. The paper also discusses how simple rules and decision trees are more reliable and understandable than complex ones.   Case Based is also relevant as the paper uses the CONFMAN database, which contains cases of international conflict management attempts, to train the machine learning algorithm and generate prediction rules.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms: The paper describes an algorithm based on a traditional genetic algorithm (GA) and how it is used to efficiently locate all optima of multimodal problems.   Probabilistic Methods: The paper uses a fitness derating function to depress fitness values in the regions of the problem space where solutions have already been found. This increases the likelihood of discovering a new solution on each iteration, which is a probabilistic approach.
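The fitness-derating idea can be sketched on a one-dimensional bimodal function: fitness near previously found optima is depressed so the next search run favours an undiscovered peak. The fitness landscape and the derating shape below are illustrative choices, not the paper's.

```python
import math

# Sequential-niching sketch: after a peak is found, a derating function
# multiplicatively depresses fitness within a radius of it, raising the
# chance that the next iteration discovers a different optimum.

def raw_fitness(x):
    # Two peaks, at x = 2 and x = 8.
    return math.exp(-(x - 2) ** 2) + math.exp(-(x - 8) ** 2)

def derated_fitness(x, found, radius=1.5):
    f = raw_fitness(x)
    for peak in found:
        d = abs(x - peak)
        if d < radius:
            f *= d / radius      # linearly depress near known peaks
    return f

# After "finding" the peak at x = 2, the derated landscape favours x = 8.
found = [2.0]
xs = [i / 10 for i in range(0, 101)]
best = max(xs, key=lambda x: derated_fitness(x, found))
```

In the full scheme a GA would search the derated landscape rather than a grid; the grid scan here just makes the effect of the derating visible.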
The paper belongs to the sub-categories of AI: Case Based, Reinforcement Learning, Theory.   Case Based: The paper discusses learning from examples, which is a key aspect of case-based reasoning. The authors propose a method for learning from examples that involves the use of agent teams and reflection.   Reinforcement Learning: The paper discusses the use of reinforcement learning in the context of agent teams. The authors propose a method for using reinforcement learning to improve the performance of agent teams.   Theory: The paper presents a theoretical framework for learning from examples and using agent teams. The authors discuss the concept of reflection and how it can be used to improve the performance of agent teams. They also provide a formal definition of the learning problem and discuss the properties of their proposed method.
Reinforcement Learning.   Explanation: The paper discusses the efficiency of the TD(λ) algorithm in approximating asynchronous value iteration in large stochastic state spaces requiring function approximation. It also presents a new algorithm for computing an accurate value function in such spaces, and compares its performance with TD(λ) on several domains. All of these are related to Reinforcement Learning, which is a sub-category of AI concerned with learning how to make decisions in an uncertain environment through trial-and-error interactions with the environment.
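The TD(λ) update with accumulating eligibility traces can be sketched on a small random-walk chain; this is a generic textbook-style example, not the new algorithm the paper proposes.

```python
import random

# Tabular TD(lambda) on a 5-state random walk: states 1..5 between two
# terminals, reward 1 only on reaching the right terminal. Eligibility
# traces spread each temporal-difference error back along the episode.

def td_lambda(n_states=5, episodes=2000, alpha=0.1, lam=0.8, gamma=1.0,
              seed=0):
    rng = random.Random(seed)
    V = [0.0] * (n_states + 2)           # indices 0 and n+1 are terminal
    for _ in range(episodes):
        s = (n_states + 1) // 2          # start in the middle
        e = [0.0] * (n_states + 2)       # eligibility traces
        while 0 < s < n_states + 1:
            s2 = s + rng.choice([-1, 1])
            reward = 1.0 if s2 == n_states + 1 else 0.0
            target = reward if s2 in (0, n_states + 1) else gamma * V[s2]
            delta = target - V[s]        # temporal-difference error
            e[s] += 1.0                  # accumulating trace
            for i in range(1, n_states + 1):
                V[i] += alpha * delta * e[i]
                e[i] *= gamma * lam      # decay all traces
            s = s2
    return V[1:n_states + 1]

values = td_lambda()
```

The true values for this chain are 1/6, 2/6, ..., 5/6, and the learned estimates approach them: increasing from left to right, with the middle state near 0.5.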
Symbolic Program Transformation, Theory.   Explanation: The paper describes a system that uses a language for representing optimization strategies and a set of transformations for reformulating those strategies. This involves symbolic program transformation, which is a subfield of AI concerned with manipulating symbolic expressions and programs. The paper also discusses the theoretical underpinnings of their approach and how it fits into a larger research program aimed at automating the strategy formulation process.
Neural Networks.   Explanation: The paper focuses on the classification performance of a neural network for combined Landsat-TM and ERS-1/SAR PRI imagery. The different combinations of data are evaluated using the neural network for learning and verification. Therefore, the paper belongs to the sub-category of AI that deals with neural networks.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses learning Markov chains with variable memory length, which is a probabilistic model. The authors also introduce a modification to the learning algorithm that takes into account the noise structure of the observed output.   Theory: The paper presents theoretical results on the effect of noise on learning Markov chains with variable memory length. The authors show that despite the super-polynomial factors affecting learning, the algorithm is still viable in practical cases. The paper also discusses the original polynomial time learning algorithm introduced by Ron, Singer, and Tishby.
This paper belongs to the sub-category of Genetic Algorithms.   Explanation: The paper is a user guide to the PGAPack Parallel Genetic Algorithm Library, which is a software library for implementing genetic algorithms. The paper explains the concepts and techniques used in genetic algorithms and how they are implemented in the PGAPack library. Therefore, the paper is primarily focused on genetic algorithms and their application in the PGAPack library.
Neural Networks, Reinforcement Learning, Theory.   Neural Networks: The paper describes how the advice provided by users is mapped into neural network implementations of the link- and page-scoring functions.   Reinforcement Learning: The paper mentions how subsequent reinforcements from the Web (e.g., dead links) and any ratings of retrieved pages that the user wishes to provide are used to refine the link- and page-scoring functions.   Theory: The paper presents a theory-refinement approach to building intelligent software agents for Web-based tasks. The approach involves providing approximate advice about the link- and page-scoring functions, which is then mapped into neural network implementations. The subsequent reinforcements from the Web and user ratings are used to refine the functions. The paper also presents a case study and an empirical study to demonstrate the effectiveness of the approach.
Rule Learning, Theory.   The paper discusses the implementation of a concept learning system that can dynamically modify the set of descriptors used to describe instances in a problem domain. This involves the creation and modification of rules that govern how objects are grouped together based on their attributes, which falls under the category of rule learning. The paper also presents a theoretical framework for understanding the importance of being able to accommodate changing contexts in concept learning, which falls under the category of theory.
Probabilistic Methods.   Explanation: The paper discusses Bayesian inference, which is a probabilistic method for modeling data. The method involves integrating over the entire parameter space, which is a key characteristic of probabilistic methods. The paper also discusses mixture distributions, which are a type of probabilistic model. Monte Carlo simulation is used to perform the Bayesian inference, which is a common technique in probabilistic methods.
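Integrating over the entire parameter space by Monte Carlo simulation can be sketched for the simplest possible case, a coin's bias under a uniform prior; this toy model is illustrative and much simpler than the mixture setting the entry describes.

```python
import random

# Plain Monte Carlo sketch of "integrating over the parameter space":
# draw parameters from the prior, weight each draw by its likelihood,
# and form the self-normalised posterior-mean estimate.

def posterior_mean_mc(heads, tails, n_samples=200_000, seed=0):
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        theta = rng.random()                       # Uniform(0, 1) prior
        like = theta ** heads * (1 - theta) ** tails
        num += theta * like
        den += like
    return num / den

est = posterior_mean_mc(heads=7, tails=3)
# The exact posterior is Beta(8, 4), whose mean is 8 / 12.
```

Real mixture posteriors are multimodal and need cleverer samplers (e.g. MCMC), but the principle, averaging over parameters weighted by likelihood and prior, is the same.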
Genetic Algorithms, Neural Networks, Reinforcement Learning.   Genetic Algorithms: The paper presents a new reinforcement learning method called SANE, which evolves a population of neurons through genetic algorithms to form a neural network capable of performing a task.   Neural Networks: SANE evolves a population of neurons to form a neural network capable of performing a task.   Reinforcement Learning: The paper presents SANE as a new reinforcement learning method that forms effective networks faster than other approaches in the inverted pendulum problem.
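As a minimal illustration of the neuroevolution idea described above, the sketch below uses a genetic algorithm to evolve the weights of a one-neuron network on a toy task. This is a hedged, generic sketch under invented parameters, not SANE itself (SANE evolves individual neurons and assembles them into networks rather than evolving whole weight vectors):

```python
import math
import random

random.seed(0)

def forward(weights, x):
    # A single sigmoid neuron: two inputs plus a bias weight.
    s = weights[0] * x[0] + weights[1] * x[1] + weights[2]
    return 1.0 / (1.0 + math.exp(-s))

# Toy task: reproduce logical OR with the one-neuron "network".
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def fitness(weights):
    # Negative squared error: higher is better.
    return -sum((forward(weights, x) - y) ** 2 for x, y in CASES)

def evolve(pop_size=30, generations=60):
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(3)              # one-point crossover
            child = [w + random.gauss(0, 0.3)      # Gaussian mutation
                     for w in a[:cut] + b[cut:]]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(round(-fitness(best), 3))   # residual squared error
```

The reinforcement-learning connection is that fitness here plays the role of a scalar return signal: no per-example gradients are used, only selection on overall performance.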
Probabilistic Methods.   Explanation: The paper discusses the probabilistic evaluation of plans, and establishes a graphical criterion for recognizing when the effects of a given plan can be predicted from passive observations on measured variables only. The paper does not discuss case-based reasoning, genetic algorithms, neural networks, reinforcement learning, rule learning, or theory.
This paper does not belong to any of the sub-categories of AI listed. The paper discusses a technique to enhance the ability of dynamic ILP processors to exploit parallelism, but it does not involve any AI methods such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, rule learning, or theory.
Probabilistic Methods.   Explanation: The paper develops a mean field theory for sigmoid belief networks based on ideas from statistical mechanics, which provides a tractable approximation to the true probability distribution in these networks and yields a lower bound on the likelihood of evidence. This approach is a probabilistic method for modeling and analyzing the behavior of the network.
Probabilistic Methods.   Explanation: The paper presents a method called "palo" that uses statistical techniques to estimate an unknown distribution and determine whether a proposed transformation will improve the performance of a system. This approach is based on a mathematically rigorous form of utility analysis, which is a key aspect of probabilistic methods in AI.
Reinforcement Learning, Rule Learning.   Reinforcement learning is the main focus of the paper, as the authors are interested in scaling up machine learning methods, especially reinforcement learning, for autonomous robot control. They use the Compositional Q-Learning (CQ-L) architecture to acquire skills for performing composite tasks with a simulated two-link manipulator. They also mention the use of Q-Learning in planning tasks, using a classifier system to encode the necessary condition-action rules.  Rule learning is also mentioned in the context of incorporating domain knowledge into reinforcement learning agents. The authors discuss the use of a classifier system to encode condition-action rules and the incorporation of domain knowledge to restrict the size of the state-action space, leading to faster learning.
Neural Networks, Theory.   Neural Networks: The paper discusses the dynamics of decision hyperplanes in a feed-forward neural network and how it relates to the adaptation process. It also explains learning deadlocks and escaping from certain local minima in the context of neural networks.  Theory: The paper presents a theoretical model of the adaptation process in neural networks using the dynamics of decision hyperplanes. It also introduces the concept of network plasticity as a dynamic property of the system and explains how hyper-plane inertia can be used to avoid destructive relearning in trained networks.
Neural Networks.   Explanation: The paper presents modifications to Recursive Auto-Associative Memory (RAAM), which is a type of neural network. The modifications aim to improve the ability of RAAM to store deeper and more complex data structures. The resulting system is then tested on a data set using RAAM.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper describes a new boosting algorithm called RankBoost for combining preferences. Boosting maintains and updates a probability distribution over training examples while combining multiple weak learners into a strong one.   Theory: The paper gives a formal framework for the problem of combining preferences and analyzes the RankBoost algorithm. It also reports two experiments carried out to assess RankBoost's performance.
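RankBoost itself learns over pairs of instances; as a minimal, hedged illustration of the distribution-reweighting idea that boosting rests on, here is a tiny AdaBoost-style loop with decision stumps (the dataset and stump thresholds are made up, and this is a stand-in for, not a reproduction of, RankBoost):

```python
import math

# Tiny 1-D dataset: (x, label in {-1, +1}).
data = [(0.1, 1), (0.3, 1), (0.45, -1), (0.6, -1), (0.9, 1)]
THRESHOLDS = [0.2, 0.4, 0.5, 0.75]

def stump(threshold, sign):
    # Weak learner: predict `sign` below the threshold, `-sign` above.
    return lambda x: sign if x < threshold else -sign

def boost(rounds=3):
    n = len(data)
    dist = [1.0 / n] * n            # distribution over training examples
    ensemble = []                   # list of (alpha, hypothesis)
    for _ in range(rounds):
        # Pick the stump with the lowest distribution-weighted error.
        best_h, best_err = None, 1.0
        for t in THRESHOLDS:
            for s in (1, -1):
                h = stump(t, s)
                err = sum(w for w, (x, y) in zip(dist, data) if h(x) != y)
                if err < best_err:
                    best_h, best_err = h, err
        alpha = 0.5 * math.log((1 - best_err) / max(best_err, 1e-10))
        ensemble.append((alpha, best_h))
        # Reweight: misclassified examples gain weight, correct ones lose it.
        dist = [w * math.exp(-alpha * y * best_h(x))
                for w, (x, y) in zip(dist, data)]
        z = sum(dist)
        dist = [w / z for w in dist]
    return ensemble, dist

ensemble, dist = boost()
print([round(w, 3) for w in dist])   # the hard fifth example dominates
```

The printed distribution is the "probabilistic" object in question: after each round, probability mass concentrates on the examples the current ensemble still gets wrong.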
Case Based, Rule Learning  Explanation:  - Case Based: The paper discusses the use of case-based learning (CBL) systems and compares different approaches to solving a specific task in natural language processing using CBL.  - Rule Learning: The paper proposes using decision trees to improve the performance of CBL systems, which involves learning rules for selecting relevant cases based on the features of the problem. The hybrid approach described in the paper combines decision trees and CBL to create a rule-based system for case retrieval.
Probabilistic Methods.   Explanation: The paper describes a probabilistic model, specifically a factor analysis model, for modeling correlations between real-valued visible variables using one or more real-valued hidden variables. The parameters of the model are learned using the wake-sleep method, which is a probabilistic learning algorithm. The paper argues that this approach is a plausible alternative to Hebbian learning as a model of activity-dependent cortical plasticity. There is no mention of any other sub-category of AI in the text.
Probabilistic Methods.   Explanation: The paper introduces a Bayesian method for estimating amino acid distributions in the states of a hidden Markov model (HMM) for a protein family or the columns of a multiple alignment of that family. The method uses Dirichlet mixture densities as priors over amino acid distributions, which are determined from examination of previously constructed HMMs or multiple alignments. The paper discusses how this Bayesian method can improve the quality of HMMs produced from small training sets and reports specific experiments on the EF-hand motif, showing that these priors produce HMMs with higher likelihood on unseen data and fewer false positives and false negatives in a database search task.
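A minimal sketch of the core computation described above: the posterior mean of a column's distribution under a Dirichlet mixture prior. The three-letter alphabet and the component parameters below are invented for illustration, not the paper's amino-acid priors:

```python
import math

def log_beta(a):
    # Log of the multivariate Beta function.
    return sum(math.lgamma(x) for x in a) - math.lgamma(sum(a))

def dirichlet_mixture_mean(counts, components):
    # components: list of (mixture_weight, alpha_vector).
    # Posterior responsibility of each component given the observed counts.
    log_r = [math.log(q)
             + log_beta([a + n for a, n in zip(alpha, counts)])
             - log_beta(alpha)
             for q, alpha in components]
    m = max(log_r)
    r = [math.exp(x - m) for x in log_r]
    z = sum(r)
    r = [x / z for x in r]
    # Blend the per-component posterior means (pseudocount estimates).
    total = sum(counts)
    est = [0.0] * len(counts)
    for rk, (_, alpha) in zip(r, components):
        s = total + sum(alpha)
        for i, (a, n) in enumerate(zip(alpha, counts)):
            est[i] += rk * (n + a) / s
    return est

# Two invented prior components over a toy 3-letter alphabet:
# one favouring the first letter, one favouring the third.
components = [(0.5, [4.0, 1.0, 1.0]), (0.5, [1.0, 1.0, 4.0])]
counts = [5, 1, 0]   # a small training column, dominated by the first letter
est = dirichlet_mixture_mean(counts, components)
print([round(p, 3) for p in est])
```

Note how the unseen third letter still receives nonzero probability: this is exactly the small-sample regularization the paper attributes to the prior.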
Probabilistic Methods, Theory  The paper proposes a cost model for machine learning applications based on the notion of net present value; because the model reasons about expected costs and benefits under uncertainty, it draws on probabilistic methods. The model extends and unifies previous models used in the field, which relates it to theory. The paper also notes that under this model, the "no free lunch" theorems of learning theory no longer apply.
Probabilistic Methods.   Explanation: The paper discusses the use of directed acyclic graphs (DAGs) as a graphical representation of conditional independence assumptions and causal relationships in statistical analysis. It also introduces the manipulative account of causation, which uses DAGs to quantify the effects of external interventions on probability distributions. These concepts are all related to probabilistic methods in AI.
Probabilistic Methods, Neural Networks  The paper belongs to the sub-category of Probabilistic Methods as it discusses the use of the Expectation-Maximization (EM) algorithm, which is a probabilistic method used for estimating parameters in statistical models. The paper also uses soft vector quantization, which is a probabilistic method for clustering data.  The paper also belongs to the sub-category of Neural Networks as it discusses the use of a neural network for soft vector quantization. The neural network is used to learn the parameters of the soft vector quantization model.
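A minimal soft-vector-quantization sketch in the spirit of the above: EM-style alternation between probabilistic (soft) cluster assignments and responsibility-weighted center updates, on made-up 1-D data. The paper's own model and network are not reproduced here:

```python
import math

def soft_assignments(points, centers, beta=2.0):
    # E-step of soft vector quantization: each point gets a soft
    # (probabilistic) membership in every cluster, not a hard label.
    out = []
    for p in points:
        w = [math.exp(-beta * (p - c) ** 2) for c in centers]
        z = sum(w)
        out.append([x / z for x in w])
    return out

def update_centers(points, resp, k):
    # M-step: each center moves to the responsibility-weighted mean.
    return [sum(r[j] * p for p, r in zip(points, resp)) /
            sum(r[j] for r in resp) for j in range(k)]

points = [0.0, 0.2, 0.1, 3.0, 3.1, 2.9]   # two obvious 1-D clusters
centers = [0.5, 2.5]
for _ in range(10):                        # alternate E- and M-steps
    resp = soft_assignments(points, centers)
    centers = update_centers(points, resp, len(centers))
print([round(c, 2) for c in centers])
```

The soft memberships are what make this "probabilistic" rather than plain k-means: every point contributes to every center in proportion to its responsibility.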
Case Based, Genetic Algorithms  Explanation:  This paper belongs to the sub-category of Case Based AI because it focuses on the adaptation of design cases to new design requirements. It also belongs to the sub-category of Genetic Algorithms because it uses an evolving representation to restructure the search space and create new designs. The evolving representation is similar to the genetic representation used in genetic algorithms, where the search space is explored by generating new solutions through the manipulation of existing ones.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses a mixture-based structure for the non-linear model, which allows for efficient estimation algorithms and large sample properties of the estimators. Theoretical issues are also backed by prediction results for benchmark time series and computer generated data sets.   Neural Networks: The non-linear model proposed in the paper is demonstrated to be sufficiently rich in approximating unknown functional forms, yet it retains some of the simple and intuitive characteristics of linear models. The architecture of the model is also emphasized, and a comparison to some more established non-linear models is made. Inference pertaining to the data structure is also made from the parameterization of the model, resulting in a better understanding of the structure and performance of the model.
Theory  Explanation: This paper is focused on theoretical analysis of learning algorithms and their coverage, rather than practical implementation or application of specific AI techniques. While the paper does mention specific algorithms such as ID3 and FRINGE, it is primarily concerned with theoretical upper bounds on coverage and the design of algorithms to approach these bounds. Therefore, the paper does not fit neatly into any of the other sub-categories listed.
Probabilistic Methods.   Explanation: The paper discusses the use of Markov models, which are probabilistic models, to analyze the behavior of genetic algorithms. The paper explores different orderings of states and lumping techniques to reduce the size of the state space, which are all probabilistic methods used in the context of Markov models.
Genetic Algorithms, Theory.   Genetic Algorithms are present in the text as the paper proposes a computational technique called evolving representations of design genes, which is a form of genetic algorithm. The co-evolutionary model of design also involves the evolution of a solution space in response to a problem space, which is a common feature of genetic algorithms.   Theory is also present in the text as the paper discusses the concept of emergence and its mechanism, drawing on insights from the Artificial Life research community. The paper also proposes a hypothesis about identifying emergent behaviour using the co-evolutionary design approach.
Theory. This paper belongs to the sub-category of AI theory as it focuses on proving learnability results in the PAC model for classes of functions in the presence of noisy and incomplete data. The authors define a new complexity measure on statistical query learning algorithms and show that a restricted view SQ algorithm for a class is a general sufficient condition for learnability in both the models of attribute noise and covered (or missing) attributes. They also give lower bounds on the number of examples required for learning in the presence of attribute noise or covering. The paper does not discuss any other sub-categories of AI such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Probabilistic Methods.   Explanation: The paper describes the use of a Hidden Markov Model (HMM) system for segmenting genomic DNA sequences into exons, introns, and intergenic regions. HMMs are a type of probabilistic model commonly used in bioinformatics for sequence analysis. The paper discusses the design and training of separate HMM modules for specific regions of DNA, and the integration of these modules into a biologically feasible topology. The resulting HMM system, called VEIL, is tested on a set of eukaryotic DNA sequences and achieves high accuracy in gene structure prediction. Therefore, this paper belongs to the sub-category of Probabilistic Methods in AI.
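As a hedged sketch of the underlying computation, here is Viterbi decoding in a toy two-state HMM whose invented emission tables loosely favour C/G in "exon" and A/T in "intron" (these are illustrative assumptions, not VEIL's actual modules or parameters):

```python
import math

# Toy two-state HMM over a DNA alphabet (made-up parameters).
states = ["exon", "intron"]
start = {"exon": 0.5, "intron": 0.5}
trans = {"exon": {"exon": 0.9, "intron": 0.1},
         "intron": {"exon": 0.1, "intron": 0.9}}
emit = {"exon": {"A": 0.2, "C": 0.3, "G": 0.3, "T": 0.2},
        "intron": {"A": 0.4, "C": 0.1, "G": 0.1, "T": 0.4}}

def viterbi(seq):
    # Dynamic programming over log-probabilities; back-pointers
    # recover the most likely state path (the segmentation).
    v = [{s: math.log(start[s]) + math.log(emit[s][seq[0]]) for s in states}]
    back = []
    for ch in seq[1:]:
        col, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: v[-1][p] + math.log(trans[p][s]))
            ptr[s] = best
            col[s] = v[-1][best] + math.log(trans[best][s]) + math.log(emit[s][ch])
        v.append(col)
        back.append(ptr)
    path = [max(states, key=lambda s: v[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi("CGCGATATAT"))
```

On this sequence the decoder labels the C/G-rich prefix "exon" and the A/T-rich suffix "intron", which is the segmentation behaviour the classification rationale refers to.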
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses the need for methods that can reason about the relatedness of individual learning tasks, which suggests a probabilistic approach to clustering tasks based on their similarity.   Reinforcement Learning: The paper describes the task-clustering (TC) algorithm, which is a form of reinforcement learning that selects the most related task cluster and exploits information selectively from this cluster only.
Probabilistic Methods.   Explanation: The paper discusses belief revision and belief update, which are both probabilistic methods for belief change. The authors propose a model for generalized update that combines aspects of both revision and update, and this model is also probabilistic in nature. The paper does not discuss any other sub-categories of AI.
Probabilistic Methods  Explanation: The paper proposes a Bayesian framework for regression problems and derives an online learning algorithm that solves regression problems with a Kalman filter. The paper also discusses the issues of prior selection and over-fitting in the context of Bayesian regression. These are all examples of probabilistic methods in AI.
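A minimal sketch of the idea above: recursive (Kalman-filter) updating of a Bayesian posterior over linear-regression weights, one observation at a time. The model, noise levels, and data are illustrative assumptions, not the paper's algorithm:

```python
# Online Bayesian linear regression as a Kalman filter: the weight
# vector is the (static) hidden state, each (x, y) pair is an observation.
def kalman_regression(samples, obs_noise=0.1, prior_var=10.0):
    # State: weights [w, b] for y = w*x + b; P is the 2x2 posterior covariance.
    w = [0.0, 0.0]
    P = [[prior_var, 0.0], [0.0, prior_var]]
    for x, y in samples:
        h = [x, 1.0]                          # observation vector
        y_hat = w[0] * h[0] + w[1] * h[1]     # predicted output
        Ph = [P[0][0] * h[0] + P[0][1] * h[1],
              P[1][0] * h[0] + P[1][1] * h[1]]
        s = h[0] * Ph[0] + h[1] * Ph[1] + obs_noise   # innovation variance
        k = [Ph[0] / s, Ph[1] / s]                    # Kalman gain
        w = [w[0] + k[0] * (y - y_hat), w[1] + k[1] * (y - y_hat)]
        P = [[P[0][0] - k[0] * Ph[0], P[0][1] - k[0] * Ph[1]],
             [P[1][0] - k[1] * Ph[0], P[1][1] - k[1] * Ph[1]]]
    return w

# Noise-free data from y = 2x + 1; the posterior mean should recover it.
data = [(x / 10.0, 2 * (x / 10.0) + 1) for x in range(20)]
w = kalman_regression(data)
print(round(w[0], 2), round(w[1], 2))
```

With no process noise this reduces to recursive least squares; the prior variance acts as the regularizer, which is where the paper's prior-selection and over-fitting discussion enters.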
Neural Networks.   Explanation: The paper deals with the efficient mapping of sparse neural networks on CNS-1, and develops parallel vector code for an idealized sparse network. The focus is on evaluating the performance of memory systems for neural networks and identifying bottlenecks in the current CNS-1 design. There is no mention of any other sub-category of AI in the text.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper describes simulations comparing the behavior of haploid and diploid populations of ecological neural networks, which is a classic application of genetic algorithms.   Neural Networks: The paper specifically focuses on diploid genotypes for ecological neural networks and compares their behavior to haploid genotypes. The simulations and results are based on the performance of these neural networks in fixed and changing environments.
Reinforcement Learning, Neural Networks  The paper belongs to the sub-categories of Reinforcement Learning and Neural Networks. Reinforcement Learning is present in the paper as the authors propose a hybrid model for learning sequential decision making that combines Reinforcement Learning with a Neural Network. The model is trained using Reinforcement Learning algorithms to optimize the policy, and the Neural Network is used to approximate the value function. Neural Networks are also present in the paper as the authors use a feedforward Neural Network to approximate the value function.
Probabilistic Methods.   Explanation: The paper discusses approaches to belief change, which involves reasoning about uncertain or probabilistic information. The authors analyze the ontology or scenario underlying belief change and highlight methodological problems related to modeling the agent's epistemic state and the status of observations. These issues are central to probabilistic reasoning and Bayesian inference, which are key components of probabilistic methods in AI.
Probabilistic Methods.   Explanation: The paper applies ideas from probability theory in a qualitative setting to define a novel approach to belief change. Specifically, the paper uses a qualitative Markov assumption to model state transitions as independent, which is a probabilistic concept. The paper also mentions a recent approach to modeling qualitative uncertainty using plausibility measures, which is another probabilistic method.
Reinforcement Learning, Probabilistic Methods.   Reinforcement Learning is the main focus of the paper, as the authors describe methods for improving solutions to control problems using traditional reinforcement learning techniques. Probabilistic methods enter through the learned estimates of expected cost, which are used to guide online search algorithms such as uniform-cost search and A* search and so improve the efficiency of the learning process.
Probabilistic Methods.   Explanation: The paper is focused on generalized queries on probabilistic context-free grammars, which are probabilistic models used in natural language processing and other areas. The authors discuss various algorithms and techniques for working with these models, including the inside-outside algorithm and the expectation-maximization algorithm. The paper also includes experimental results demonstrating the effectiveness of these methods. While the paper does not explicitly discuss other sub-categories of AI, it is primarily concerned with probabilistic modeling and inference, which falls under the umbrella of probabilistic methods.
Probabilistic Methods.   Explanation: The paper presents a formalism that combines logic and probabilities, and uses qualitative versions of Jeffrey's Rule and Bayesian updating for belief revision. The rules are interpreted as order-of-magnitude approximations of conditional probabilities, and inferences are supported by a unique priority ordering on rules. The paper also discusses causal modeling and how it can be facilitated by imposing Markovian conditions that constrain world rankings. All of these aspects are related to probabilistic methods in AI.
Probabilistic Methods.   Explanation: The paper uses smoothing spline ANOVA, which is a probabilistic method that models the relationship between risk factors and incidence of a disease. The method involves estimating the probability distribution of the response variable given the predictor variables. The paper also discusses the use of Bayesian methods for model selection and inference.
Reinforcement Learning.   Explanation: The paper describes a system called Clay that integrates motor schema-based control and reinforcement learning. The coordination modules in Clay use reinforcement learning to activate specific assemblages based on the presently perceived situation, and learning occurs as the robot selects assemblages and samples a reinforcement signal over time. Therefore, the paper belongs to the sub-category of Reinforcement Learning in AI.
Neural Networks, Theory.   Neural Networks: The paper discusses the role of cortical synchronization in perception, which is a concept that is often studied using neural network models. The authors also mention previous studies that have used neural network models to investigate perceptual framing.  Theory: The paper presents a theoretical framework for understanding how cortical synchronization contributes to perceptual framing. The authors propose a model that integrates previous theories of cortical synchronization and perceptual processing, and they use this model to generate predictions about how perceptual framing should be affected by changes in cortical synchronization.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper proposes a learning algorithm for dynamic neural networks. It discusses the limitations of existing learning algorithms and proposes a new approach, applicable to feedforward or recurrent networks, that is designed to deal with hidden units and with units whose past activations are hidden in time.  Reinforcement Learning: The paper discusses the credit assignment problem for dynamic neural networks in non-stationary environments and proposes a parallel on-line learning algorithm that performs local computations only. The approach is inspired by Holland's idea of the bucket brigade for classifier systems, a form of reinforcement learning: the network consumes "weight-substance" and continually tries to distribute this substance onto its connections in an appropriate way.
Theory.   Explanation: The paper discusses a theoretical approach to creative understanding that makes use of a principled ontology to provide reasonable bounding for the manipulation of known concepts in order to understand novel ones. The paper does not discuss any specific AI techniques or algorithms such as case-based reasoning, neural networks, or reinforcement learning.
Case Based, Theory  The paper proposes a computational model based on ideas from reconstructive dynamic memory and situation assessment in case-based reasoning, which falls under the category of Case Based AI. The paper also discusses the concept of serendipitous recognition in the context of creative mechanical design, which is a theoretical exploration of AI.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper focuses on the influence of different methods for estimating probabilities on attribute selection measures for decision tree induction. The experiments conducted in the paper show that different measures obtained by different probability estimation methods determine the preferential order of attributes in a given node, which in turn determines the structure of a constructed decision tree.   Rule Learning: The paper analyzes two well-known measures for attribute selection in decision tree induction, informativity and gini index. These measures are used to determine the preferential order of attributes in a given node, which is then used to construct a decision tree. This process can be seen as a form of rule learning, where the rules are represented by the decision tree.
Theory.   Explanation: The paper presents theoretical models and algorithms for learning in situations where the function used to classify examples may switch back and forth between a small number of different concepts during the course of learning. The paper does not discuss or apply any of the other sub-categories of AI listed in the question.
Case Based, Theory.   The paper belongs to the sub-category of Case Based AI because it focuses on the construction of similarity measures for case base retrieval. Case based systems are a type of AI that use past experiences (cases) to solve new problems. The paper also belongs to the sub-category of Theory because it presents a systematic approach for constructing similarity measures, which can be seen as a theoretical framework for designing case based systems.
Theory.   Explanation: The paper presents a theoretical framework for understanding questions and question asking, rather than focusing on the implementation of specific AI techniques or algorithms. While the paper does draw on insights from cognitive psychology and linguistics, it does not explicitly use any of the other sub-categories of AI listed.
Genetic Algorithms.   Explanation: The paper discusses the use of genetic programming to automatically implement abstract data structures, specifically focusing on evolving a list data structure. The paper describes the GP architecture and techniques for improving the efficiency of the search. The use of genetic programming, which is a type of genetic algorithm, is the main focus of the paper.
Neural Networks.   Explanation: The paper discusses a particular approach to analog computation based on dynamical systems used in neural networks research. The systems have a fixed structure corresponding to an unchanging number of "neurons" and are more powerful than Turing Machines but have limits on their capabilities under polynomial-time constraints. The paper also notes a precise correspondence between nets and standard non-uniform circuits with equivalent resources, which is a common topic in neural network research.
Probabilistic Methods.   Explanation: The paper introduces a convergence diagnostic procedure for Markov Chain Monte Carlo (MCMC) algorithms, which are probabilistic methods used for sampling from complex distributions. The paper discusses how the diagnostic can be applied to two commonly used MCMC samplers, the Gibbs Sampler and the Metropolis Hastings algorithm.
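The paper's specific diagnostic is not reproduced here; as a hedged sketch of the setting, the code below runs a Metropolis sampler targeting a standard normal and applies a crude multi-chain check, comparing post-burn-in means of chains started from dispersed points, in the spirit of such convergence diagnostics:

```python
import math
import random

random.seed(1)

def metropolis(n, step=1.0, x0=0.0):
    # Metropolis sampler targeting a standard normal density.
    x, chain = x0, []
    for _ in range(n):
        prop = x + random.uniform(-step, step)
        # Accept with probability min(1, pi(prop)/pi(x)), in log space.
        if math.log(random.random()) < (x * x - prop * prop) / 2.0:
            x = prop
        chain.append(x)
    return chain

# Run chains from over-dispersed starting points and compare their means.
chains = [metropolis(20000, x0=s) for s in (-5.0, 0.0, 5.0)]
means = [sum(c[5000:]) / len(c[5000:]) for c in chains]   # discard burn-in
print([round(m, 1) for m in means])
```

If the chains have mixed, the post-burn-in means agree despite the dispersed starting points; large disagreement would signal non-convergence.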
Neural Networks, Probabilistic Methods.   Neural Networks: The Independent Component Analysis (ICA) algorithm used in this paper is a type of neural network that separates the problem of source identification from that of source localization.   Probabilistic Methods: The ICA algorithm is a probabilistic method that assumes statistical independence between sources and maximizes non-Gaussianity to separate them. The paper also discusses tracking nonstationarities in EEG and behavioral state using ICA via changes in the amount of residual correlation between ICA-filtered output channels, which involves probabilistic modeling.
Neural Networks, Reinforcement Learning.   Neural Networks are present in the paper through the references to Schmidhuber (1990b) and Servan-Schreiber et al. (1988), which discuss the use of dynamic neural networks and simple recurrent networks for learning and processing information.   Reinforcement Learning is present in the paper through the mention of "difficult learning control problems" in the title, which suggests that the paper is focused on finding solutions to problems that require reinforcement learning techniques.
Reinforcement Learning, Neural Networks  Explanation:  This paper belongs to the sub-category of Reinforcement Learning because it proposes a neuro-dynamic programming approach to retailer inventory management, which involves using a reinforcement learning algorithm to optimize inventory decisions. The paper also mentions the use of a neural network to model the demand for products, which places it in the sub-category of Neural Networks. The neural network is used to predict future demand based on historical sales data, which is then used as input to the reinforcement learning algorithm.
Rule Learning  Explanation: The paper discusses the standard approach to decision tree induction, which is a form of rule learning. The alternative approach of using lookahead is also within the realm of rule learning. The other sub-categories of AI (Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, Theory) are not directly relevant to the content of the paper.
Neural Networks, Rule Learning.   Neural Networks: The paper presents a neurally inspired competitive classifier (CC) which is used to extract discrete classes from continuous valued input features. This CC is then used in combination with a supervised machine learning model to solve a problem.   Rule Learning: The paper combines the CC with two supervised learning models, ASOCS-AFE and ID3-AFE, which use the discrete classifications and other information to generate feedback and guide the CC into potentially more useful classifications of the continuous valued input features. This feedback loop is an example of rule learning.
Probabilistic Methods, Reinforcement Learning, Theory.  Probabilistic Methods: The paper formulates the problem of merging multiple Markov decision processes (MDPs) into a composite MDP, which is a probabilistic method.  Reinforcement Learning: The paper presents a new dynamic programming algorithm for finding an optimal policy for the composite MDP, which is a type of reinforcement learning.  Theory: The paper analyzes various aspects of the algorithm and presents a theoretically-sound solution for dynamically merging MDPs.
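The paper's merging algorithm is not reproduced here; as a minimal sketch of the dynamic-programming core on which such methods rest, the code below runs value iteration on an invented three-state MDP:

```python
# Minimal value iteration on a toy MDP (not the paper's merging
# algorithm, just the dynamic-programming machinery it builds on).
# States 0 and 1 have actions; state 2 is absorbing with zero reward.
# Each (state, action) maps to a list of (probability, next_state, reward).
P = {
    (0, "a"): [(0.8, 1, 5.0), (0.2, 0, 0.0)],
    (0, "b"): [(1.0, 2, 1.0)],
    (1, "a"): [(1.0, 2, 10.0)],
    (1, "b"): [(1.0, 0, 0.0)],
}
GAMMA = 0.9

def value_iteration(eps=1e-8):
    v = {0: 0.0, 1: 0.0, 2: 0.0}
    while True:
        delta = 0.0
        for s in (0, 1):
            # Bellman optimality backup: best expected one-step return.
            best = max(
                sum(p * (r + GAMMA * v[ns]) for p, ns, r in P[(s, a)])
                for a in ("a", "b"))
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < eps:
            return v

v = value_iteration()
print(round(v[0], 2), round(v[1], 2))
```

The same backup is what a dynamic programming algorithm over a composite MDP repeats; merging changes the state space, not the update rule.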
Theory.   Explanation: The paper presents theoretical results and algorithms for the problem of fitting distance matrices by tree metrics, without using any specific AI techniques such as neural networks or reinforcement learning.
Case Based, Rule Learning  Explanation: The paper is primarily focused on Case-Based Planning (CBP), which falls under the sub-category of Case-Based AI. Additionally, the paper discusses the use of Explanation-based Learning (EBL) techniques to improve CBP, which falls under the sub-category of Rule Learning.
Probabilistic Methods, Neural Networks  The paper belongs to the sub-category of Probabilistic Methods because it discusses the use of Bayesian networks to model data and make predictions. The authors also mention the use of Markov Chain Monte Carlo (MCMC) methods for inference.  The paper also belongs to the sub-category of Neural Networks because it discusses the use of deep learning models for data exploration. The authors mention the use of autoencoders and convolutional neural networks (CNNs) for feature extraction and dimensionality reduction. They also discuss the use of recurrent neural networks (RNNs) for time series analysis.
Probabilistic Methods.   Explanation: The paper discusses the use of confidence estimation as a technique for speculation control in modern processors. Confidence estimation is a probabilistic method that involves predicting the likelihood of correct outcomes for data and control decisions, and using this information to determine whether to execute operations speculatively or not. The paper compares different confidence estimation mechanisms and evaluates their performance using detailed pipeline simulations.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses algorithms that use probabilistic methods to address issues involved with using relational representations. For example, it mentions Markov Logic Networks (MLNs) which combine first-order logic with probabilistic graphical models.  Rule Learning: The paper surveys algorithms that embody different approaches to relational learning, including rule-based methods such as Inductive Logic Programming (ILP) and Relational Decision Trees (RDTs). These algorithms learn rules that capture relationships between entities in the data.
Probabilistic Methods, Neural Networks, Theory.   Probabilistic Methods: The paper presents a new approximate learning algorithm for Boltzmann Machines using a systematic expansion of the Gibbs free energy. Boltzmann Machines are a type of probabilistic model.  Neural Networks: Boltzmann Machines are a type of neural network, and the paper presents a learning algorithm for them.  Theory: The paper presents a theoretical approach to improving the learning algorithm for Boltzmann Machines, using mean field theory and linear response correction.
Reinforcement Learning, Probabilistic Methods.   Reinforcement Learning is the main sub-category of AI that this paper belongs to. The paper introduces a methodology for solving combinatorial optimization problems through the application of reinforcement learning methods. The approach involves analyzing a set of "training" problem instances and learning a search control policy for solving new problem instances.   Probabilistic Methods are also present in the paper, as the method based on simulated annealing is mentioned as a non-learning search procedure that is less effective than the learned search control policy. Simulated annealing is a probabilistic method for finding a good approximation to the global optimum of a given function.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as it deals with learning in Markov decision processes with undiscounted rewards. The paper analyzes the learning curve for this type of problem and explores methods for estimating the expected return per unit time.   Theory is also a relevant sub-category, as the paper uses methods from statistical mechanics to calculate lower bounds on the agent's performance in the thermodynamic limit. The paper provides a theoretical analysis of the problem and derives lower bounds on the return of policies based on imperfect statistics.
This paper does not belong to any of the sub-categories of AI listed. It is a study on the design and implementation of predicated execution support for instruction-level parallel processors, which falls under the broader category of computer architecture and optimization.
Theory.   Explanation: The paper primarily focuses on theoretical analysis of the complexity and power of self-directed learning, and does not involve practical implementation or application of any specific AI sub-category such as case-based reasoning, neural networks, etc.
Genetic Algorithms.   Explanation: The paper presents a lower-bound result on the computational power of a genetic algorithm in the context of combinatorial optimization. It describes a new genetic algorithm, the merged genetic algorithm, and proves its efficiency for the class of monotonic functions. The analysis pertains to the ideal behavior of the algorithm, showing convergence of probability distributions over the search space of combinatorial structures to the optimal one. The paper concludes with a discussion of some immediate problems that lie ahead for genetic algorithms.
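The merged genetic algorithm itself is not reproduced here; as a minimal sketch of a canonical GA on a monotonic objective of the kind the analysis addresses, the code below runs OneMax with tournament selection, one-point crossover, and bit-flip mutation (all parameters invented):

```python
import random

random.seed(3)

def onemax(bits):
    # A simple monotonic fitness function: count the 1-bits.
    return sum(bits)

def ga(n_bits=20, pop_size=30, generations=40, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            # Tournament selection of two parents.
            a = max(random.sample(pop, 3), key=onemax)
            b = max(random.sample(pop, 3), key=onemax)
            cut = random.randrange(1, n_bits)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut)   # bit-flip mutation
                     for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=onemax)

best = ga()
print(onemax(best))
```

Convergence analyses like the one the paper describes study how the population's distribution over such a search space concentrates on the optimum as generations pass.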
Neural Networks, Probabilistic Methods.   Neural Networks: The paper uses a single neural network for the second model to predict the residual error of the first model.   Probabilistic Methods: The first model is a mixture of experts that predicts the electricity demand from exogenous variables and can be viewed as a nonlinear regression model of mixture of Gaussians. The paper also analyzes the splitting of the input space generated by the mixture of experts model, which is a probabilistic method.
Reinforcement Learning.   Explanation: The paper discusses Sutton's TD(λ) method, which is a reinforcement learning algorithm used to represent cost functions in an absorbing Markov chain with transition costs. The paper also proposes a variation of TD(0) that performs better on the example given. Therefore, the paper belongs to the sub-category of AI known as Reinforcement Learning.
Probabilistic Methods.   Explanation: The paper discusses the use of chain graphs, which are a probabilistic graphical model, for learning. The authors explain how chain graphs can be used to represent conditional independence relationships between variables, and how they can be used for causal inference and prediction. The paper also discusses the use of Bayesian networks, which are another type of probabilistic graphical model, for learning. Overall, the paper focuses on the use of probabilistic methods for learning.
Case Based.   Explanation: The paper is primarily focused on case-based reasoning and how graph-structured representations can be used to support it. The examples provided are from two case-based planning systems, chiron and caper. While the paper does touch on other AI sub-categories such as logic and knowledge representation, they are discussed in the context of supporting case-based reasoning.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the use of a Bayesian approach in the modification of regression trees. This involves incorporating prior knowledge about the distribution of the data and using it to make probabilistic predictions.  Rule Learning: The paper focuses on the TDIDT (Top-Down Induction of Decision Trees) algorithm, which is a rule learning method used to construct decision trees. The modification proposed in the paper involves changing the way the leaves of the tree are constructed, which affects the rules that are generated and the interpretation of the tree.
Neural Networks, Theory.   Neural Networks: The paper discusses the emergence of cortical functionality through self-organization of complex structures, which is a key concept in neural networks. The authors also mention the use of artificial neural networks as a tool for studying cortical functionality.  Theory: The paper presents a general theory of cortical functionality emergence through self-organization, which is supported by quantitative results. The authors discuss the underlying principles and mechanisms of self-organization, as well as the implications of their theory for understanding brain function and developing artificial intelligence.
Theory.   Explanation: The paper discusses a methodology for evaluating theory revision systems, which falls under the category of theory in AI. The paper does not discuss any other sub-categories of AI such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the use of decomposition to analyze a given dataset, which involves probabilistic methods such as Bayesian networks and probabilistic graphical models.   Rule Learning: The paper also discusses the use of decomposition to derive a classifier of high classification accuracy, which involves rule learning techniques such as decision trees and association rule mining.
Rule Learning, Theory.   The paper describes a method for generalizing results from case studies, which involves deriving rules that describe when certain algorithms outperform others. This falls under the sub-category of Rule Learning. Additionally, the paper discusses the limitations and advantages of this approach, which involves theoretical considerations. Therefore, the paper also falls under the sub-category of Theory.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the use of information gain, which is a probabilistic method, in the learning process.   Rule Learning: The paper discusses the learning of multiple descriptions for each class, which can be seen as learning rules for classification. The paper also discusses the relevance of attributes and the presence of class noise, which are important considerations in rule learning.
Neural Networks.   Explanation: The paper presents the Plannett system, which combines artificial neural networks to achieve expert-level accuracy on the task of recognizing volcanos in radar images of the surface of the planet Venus. The ANNs vary along two dimensions: the set of input features used to train and the number of hidden units. The ANNs are combined simply by averaging their output activations. Therefore, the paper belongs to the sub-category of AI called Neural Networks.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as the authors propose an approach to planning and learning at multiple levels of temporal abstraction based on the mathematical framework of Markov decision processes and reinforcement learning. The paper extends prior work on temporally abstract models and presents new results in the theory of planning with macro actions.   Theory is also relevant as the paper presents a formal semantics of models of macro actions that guarantees the validity of planning using such models. The paper also discusses the generalization of the classical notion of a macro operator and the need for macro actions to represent common-sense higher-level actions.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses statistical tests for determining whether one learning algorithm outperforms another on a particular learning task. The tests are based on probability and statistical analysis.   Theory: The paper discusses the theoretical aspects of statistical testing and compares the performance of different tests experimentally. It also discusses the concept of Type I error and power in statistical testing.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses a variant of the probably-approximately-correct (PAC) model for learning near-optimal active classifiers.   Theory: The paper's main contribution is defining the framework for learning near-optimal active classifiers, which falls under the category of theoretical research. The paper also discusses the intractability of the task in some cases, which is a theoretical result.
Probabilistic Methods.   Explanation: The paper discusses probabilistic inference algorithms and their reformulation within the bucket elimination framework. The paper also provides complexity bounds for these algorithms.
Probabilistic Methods. This paper belongs to the sub-category of probabilistic methods in AI. Bayesian model selection is a probabilistic method that uses Bayes' theorem to update the probability of a hypothesis based on new evidence. The paper discusses the use of Bayesian model selection in social research, which involves modeling complex social phenomena using probabilistic models. The paper also discusses the advantages of Bayesian model selection over other model selection methods, such as hypothesis testing and model fit criteria.
Probabilistic Methods.   Explanation: The paper proposes a family of algorithms for reasoning in probabilistic and deterministic networks, as well as for optimization tasks. The algorithms combine tree-clustering with conditioning to trade space for time, and the selection of the algorithm that best meets a given time-space specification is based on analyzing the problem structure. Therefore, the paper is primarily focused on probabilistic methods in AI.
Probabilistic Methods.   Explanation: The paper proposes a new approach to probabilistic inference on belief networks, and discusses various existing methods for probabilistic inference. The focus of the paper is on improving the efficiency and effectiveness of probabilistic inference, which is a key aspect of probabilistic methods in AI.
Neural Networks. This paper belongs to the Neural Networks sub-category of AI. The paper investigates the dynamics and collective properties of feedback networks with spiking neurons and their potential computational role in associative memory. The paper shows that model systems with integrate-and-fire neurons can function as associative memories on two distinct levels, where binary patterns are represented by the spike activity and analog patterns are encoded in the relative firing times between individual spikes or between spikes and an underlying subthreshold oscillation. The paper suggests that cortical neurons may perform a broad spectrum of associative computations far beyond the scope of the traditional firing-rate picture.
Genetic Algorithms.   Explanation: The paper discusses a new method for maintaining diversity in a standard generational evolutionary algorithm by creating subpopulations based on tag bits. The paper specifically mentions "standard generational evolutionary algorithm" and "tag bits," which are both key components of genetic algorithms. The other sub-categories of AI are not mentioned in the text.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper proposes a statistical theory for object representation based on local features. The theory involves modeling the distribution of local features using probability density functions and using these distributions to represent objects. The authors also discuss the use of probabilistic models for feature selection and dimensionality reduction.  Theory: The paper presents a general statistical theory for object representation based on local features. The theory involves modeling the distribution of local features using probability density functions and using these distributions to represent objects. The authors also discuss the theoretical properties of the proposed method, such as its ability to handle occlusion and partial object matching.
Neural Networks, Probabilistic Methods, Theory.   Neural Networks: The paper discusses pattern recognition machines, which are a type of neural network. The paper also discusses enhancing the training data, which is a common technique used in neural network training.  Probabilistic Methods: The paper discusses regularization, which is a common technique used in probabilistic methods to prevent overfitting.  Theory: The paper discusses the relationship between two approaches to achieving invariance in machine learning, and provides a theoretical explanation for how the regularized cost function approximates the result of adding transformed examples to the training data.
Probabilistic Methods.   Explanation: The paper proposes a new method for exact Bayesian network inference, which involves factorizing a joint probability into a set of conditional probabilities. The method utilizes a notion of causal independence to further factorize the conditional probabilities and obtain a finer-grain factorization of the joint probability. The paper also presents an algorithm for Bayesian network inference that uses this factorization to find the posterior distribution of a query variable given evidence. Empirical studies are conducted to evaluate the effectiveness of the method on medical diagnosis networks. All of these aspects are related to probabilistic methods in AI.
Reinforcement Learning, Action Models.   Reinforcement learning is present in the text as one of the methods for control knowledge acquisition that is compared to the action models approach. The paper compares the performance of these two methods with human learning on the NRL Navigation task.   Action models are also present in the text as a novel variant of control knowledge acquisition that is compared to reinforcement learning. The paper's results indicate that the performance of the action models approach more closely approximates the rate of human learning on the task than does reinforcement learning or the hybrid. The paper also explores the impact of background knowledge on system performance by adding knowledge used by the action models system to the benchmark reinforcement learner, elevating its performance above that of the action models system.
Theory.   Explanation: This paper focuses on the theoretical aspects of noise-tolerant learning algorithms in the PAC model, specifically using the statistical query learning model as a tool. The paper discusses the complexity of statistical query algorithms and their simulations in the presence of noise, and proposes improvements and new variants of the model. The paper also provides general upper bounds on learning with statistical queries and PAC simulation. While some other sub-categories of AI may be relevant to the implementation or application of these theoretical results, the primary focus of the paper is on theory.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper discusses the use of Reduced Error Pruning in relational learning algorithms, which are a type of rule learning algorithm.   Probabilistic Methods are also present in the text as the paper proposes a new method, Incremental Reduced Error Pruning, which attempts to address the problems with Reduced Error Pruning. This new method uses probabilistic estimates to incrementally prune the rule set as it is learned.
Neural Networks.   Explanation: The paper focuses on the use of PREENS' neural network simulation programs and provides a tutorial on how to use them. While other sub-categories of AI may be indirectly related to the use of PREENS, the primary focus of the paper is on neural networks.
Neural Networks.   Explanation: The paper describes the development of a network of coupled oscillators that both produces and perceives metrical patterns of pulses, and learns to prefer 3-beat patterns over 2-beat patterns. This is a clear example of a neural network, which is a sub-category of AI that is inspired by the structure and function of the human brain.
This paper belongs to the sub-category of AI known as Neural Networks.   Explanation: The title of the paper explicitly mentions "Neural Networks" and the abstract describes the proposal as focusing on "Knowledge Integration and Rule Extraction in Neural Networks." While other sub-categories of AI may be mentioned or utilized in the research, the primary focus and methodology appears to be centered around neural networks.
Probabilistic Methods.   Explanation: The paper proposes a model of abduction based on the revision of the epistemic state of an agent, which involves reasoning about beliefs and their probabilities. The model generates explanations that nonmonotonically predict an observation, which is a probabilistic approach to abduction. The paper also discusses the preference ordering on explanations, which is defined in terms of normality or plausibility, another probabilistic aspect. Finally, the paper reconstructs two key paradigms for model-based diagnosis, abductive and consistency-based diagnosis, within the proposed framework, which involves probabilistic reasoning.
Probabilistic Methods.   Explanation: The paper discusses the optimal probability of activation for different designs of Sparse Distributed Memory, which involves probabilistic methods. The authors assume that the hard locations, storage addresses, and stored data are randomly chosen, and they consider different levels of random noise in the reading address. There is no mention of any other sub-category of AI in the text.
Probabilistic Methods.   The paper discusses a method for estimating the number of random data vectors stored in a sparse distributed memory with randomly chosen hard locations. The method is based on probabilistic reasoning and provides an unbiased estimate with high accuracy. The coefficient of variation is inversely proportional to √(MU), where M is the number of hard locations in the memory and U the length of the data. This indicates the use of probabilistic methods in the paper.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper describes a ranked-model semantics for if-then rules admitting exceptions, which incorporates the principle of Markov shielding to impose independence constraints on rankings of interpretations. This approach provides a coherent framework for evidential and causal reasoning, and resolves problems associated with specificity, prediction, and abduction.   Rule Learning: The paper focuses on if-then rules and their priorities, which are automatically extracted from the knowledge base to facilitate the construction and retraction of plausible beliefs. The formalism also offers a natural way of unifying belief revision, belief update, and reasoning about actions.
Genetic Algorithms.   Explanation: The paper explicitly mentions "genetic algorithm approach" in the title and throughout the abstract. The paper discusses how genetic algorithms can be used to solve job-shop scheduling, rescheduling, and open-shop scheduling problems. The other sub-categories of AI are not mentioned in the title or abstract, and there is no discussion of their use in the paper.
Rule Learning, Theory.   Explanation:  The paper by Quinlan focuses on the development of a rule learning algorithm that learns logical definitions from relations. This falls under the sub-category of AI known as Rule Learning, which involves the development of algorithms that can learn rules or decision trees from data.   Additionally, the paper also deals with the development of a first-order theory revision algorithm, which falls under the sub-category of AI known as Theory. Theory deals with the development of algorithms that can reason about and revise logical theories.
Probabilistic Methods.   Explanation: The paper discusses the Expectation-Maximization (EM) algorithm for maximum likelihood learning of finite Gaussian mixtures, which is a probabilistic method for modeling data. The paper also analyzes the convergence properties of the EM algorithm and compares it to other algorithms for learning Gaussian mixture models.
Neural Networks, Theory.  Neural Networks: The paper proposes an adaptive-oscillator model of rhythmic pattern processing that is based on the behavior of neural oscillators. The model is inspired by the way that neurons in the brain synchronize their firing to create rhythmic patterns.  Theory: The paper presents a theoretical framework for understanding how humans perceive time as phase. It discusses the concept of phase in detail and proposes a model that can account for the perception of rhythmic patterns. The paper also discusses the implications of the model for understanding the neural mechanisms underlying rhythmic perception.
Probabilistic Methods.   Explanation: The paper discusses a reference Bayesian test, which is a probabilistic method used for hypothesis testing. The paper also mentions the Schwarz Criterion, which is a probabilistic method used for model selection.
Rule Learning.   Explanation: The paper introduces a first order regression algorithm that combines regressional learning with standard ILP concepts, such as first order concept description and background knowledge, to generate a clause by successively refining the initial clause. The algorithm employs a covering approach (beam search), a heuristic impurity function, and stopping criteria based on local improvement, minimum number of examples, maximum clause length, minimum local improvement, minimum description length, allowed error, and variable depth. The paper presents the results of the system's application in some artificial and real-world domains, and special emphasis is given to the evaluation of obtained models by domain experts and their comments on the aspects of practical use of the induced knowledge.
Neural Networks, Theory.   Neural Networks: The paper discusses a computational model of a bihemispheric cerebral cortex, which is a type of neural network. The algorithms developed for measuring the degree of organization, symmetry, and lateralization in topographic map formation are also based on neural network principles.  Theory: The paper develops a theoretical framework for measuring the degree of hemispheric organization and asymmetry of organization in a bihemispheric cerebral cortex. The measures developed are based on mathematical concepts and principles, such as sigmoid-type error averaging, and are tested for their performance in several topographic maps obtained by self-organization of an initially random network.
Neural Networks, Theory.   Neural Networks: The paper discusses the use of recurrent neural net architectures to learn structure in temporally-extended sequences. It proposes the use of hidden units with different time constants to capture global structure that cannot be learned by standard back propagation.  Theory: The paper presents a theoretical problem of learning structure in temporally-extended sequences and proposes a solution using hidden units with different time constants. It also discusses the limitations of standard back propagation in learning arbitrary contingencies in sequences.
Neural Networks.   Explanation: The paper describes an incremental, higher-order, non-recurrent neural-network for sequence learning. The entire paper is focused on the development and application of this neural network, making it the most related sub-category of AI.
Probabilistic Methods.   Explanation: The paper discusses the use of Markov Chain Monte Carlo (MCMC) algorithms to approximate the posterior distribution and Bayes estimates of parameters in hidden Markov chains. MCMC is a probabilistic method used in Bayesian inference. The paper also proposes on-line controls based on non-parametric tests to evaluate the convergence of the MCMC algorithms.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper investigates the use of a supervised neural network called backpropagation and a nonsupervised, self-organizing feature map for the classification of diffuse liver disease. The conclusion states that neural networks are an attractive alternative to traditional statistical techniques when dealing with medical detection and classification tasks.  Probabilistic Methods: The paper mentions the use of a statistical method, i.e., discriminant analysis, which is a probabilistic method. The investigation was performed on the basis of a previously selected set of acoustic and image texture parameters, which were used to generate additional but independent data with identical statistical properties. The generated data were used for training and test sets, and the final test was made with the original patient data as a validation set. The use of generated data for training the networks and the discriminant classifier has been shown to be justified and profitable.
Neural Networks, Theory.  Explanation:  1. Neural Networks: The paper discusses nonlinear extensions of Principal Component Analysis (PCA) neural networks and their learning rules. It also compares these networks with other signal expansions like Projection Pursuit (PP) and Independent Component Analysis (ICA).  2. Theory: The paper presents theoretical results on the separation of mixtures of real-world signals and images using the nonlinear PCA neural networks. It also relates the networks and their learning rules to other signal expansions like PP and ICA.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the role of neural networks in unsupervised category learning and how they can be used to model human learning. The authors also mention the use of backpropagation, a common neural network training algorithm, in their simulations.  Probabilistic Methods: The paper discusses the use of Bayesian inference in modeling category learning and how it can be used to explain human behavior. The authors also mention the use of probability distributions in their simulations.
Probabilistic Methods.   The paper describes a semi-parametric periodic spline function that can be fit to circadian rhythms. This involves estimating the time and magnitude of the peak or nadir, which requires probabilistic methods to model the uncertainty in the estimates. Additionally, the paper describes tests of fit for components in the model, which also involve probabilistic methods. There is no mention of any other sub-categories of AI in the text.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms are mentioned in the abstract and are a key component of the proposed algorithms. The paper describes the use of "knapsack and genetic approaches to the utilization of 'building blocks' of partial solutions."   Probabilistic Methods are also present in the paper, as the algorithms use lower bounds to guide the search for solutions. The paper mentions "subproblem-coordination paradigm (and lower bounds) of price-directive decomposition methods."
Probabilistic Methods.   Explanation: The paper discusses the use of Bayes and empirical Bayes methods for smoothing crude maps of disease risk, which are probabilistic methods. The authors also mention the need for careful implementation of Markov chain Monte Carlo (MCMC) methods for fitting the models, which is another probabilistic method.
Neural Networks, Probabilistic Methods, Theory.  Neural Networks: The paper presents a novel unsupervised neural network for dimensionality reduction.  Probabilistic Methods: The paper discusses the importance of a dimensionality reduction principle based solely on distinguishing features, which is a probabilistic approach.  Theory: The paper presents a new statistical insight into the synaptic modification equations governing learning in BCM neurons. The paper also discusses the connection between the proposed neural network and exploratory projection pursuit methods, which is a theoretical aspect.
Neural Networks, Rule Learning, Theory.   Neural Networks: The paper discusses the use of neural networks in the experiments to evaluate the effect of input representation on generalization performance.  Rule Learning: The paper also discusses the use of decision trees, which are a type of rule learning algorithm, in the experiments.  Theory: The paper discusses the theoretical concept of the importance of input representation in the accuracy of a learned concept description, and presents experiments to test this theory.
Genetic Algorithms. The paper specifically reviews selection schemes from the field of Genetic Algorithms, as well as Evolution Strategies and Genetic Programming. The paper also emphasizes the role of selection in evolutionary algorithms, which is a key component of Genetic Algorithms.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper discusses self-supervised backpropagation, which is an unsupervised learning procedure for feedforward networks. It also mentions using powerful simulators for backpropagation.  Reinforcement Learning: The paper mentions using a variant of the competitive learning procedure to develop topology-preserving maps. It also discusses a simple extension of the cost function of backpropagation to produce a competitive version of self-supervised backpropagation, which can be used for topographic maps.
Neural Networks, Theory.   Neural Networks: The paper describes a computational model of the perception and production of rhythmic patterns using a network of oscillators that couple with input patterns and with each other. The oscillators whose frequencies match periodicities in the input tend to become activated, which is a characteristic of neural networks.  Theory: The paper presents a theoretical framework for representing rhythmic patterns in a network of oscillators. It discusses the importance of metrical structure in the perception and production of patterns in time and describes how the network represents rests in rhythmic patterns. The paper also makes predictions about the relative difficulty of patterns and the effect of deviations from periodicity in the input.
Theory.   Explanation: This paper presents theoretical results on L p -approximation orders with scattered centres using radial basis functions. It does not involve any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Theory.   Explanation: The paper focuses on developing a general tool for extending approximation schemes that use integer translates of a basis function to the non-uniform case. It provides a unified error analysis and improves upon recent results on scattered center approximation. The paper does not involve any application or implementation of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Theory.   Explanation: The paper is focused on deriving an upper bound on the approximation power of principal shift-invariant spaces, which is a theoretical result. The paper does not involve any practical implementation or application of AI techniques such as neural networks, genetic algorithms, or reinforcement learning.
Reinforcement Learning, Explanation-Based Learning.   Reinforcement Learning is the main focus of the paper, as it compares different versions of RL with the newly proposed Explanation-Based Reinforcement Learning (EBRL). The paper also discusses how RL methods involve propagating information backward from the goal toward the starting state, which is similar to the process used in EBL.   Explanation-Based Learning is also a key component of the paper, as it is the basis for the development of EBRL. The paper discusses how EBL computes the weakest preconditions of operators and performs propagation on a region-by-region basis, which is different from RL methods that perform propagation on a state-by-state basis. The paper also compares the performance of EBRL to standard EBL.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses the use of statistical state variables to find representative centers of lower dimensional manifolds that define boundaries between classes in multi-dimensional, multi-class data. This allows for the efficient placement of centers for pattern classification and the determination of the optimal number of centers for clouds of data with space-varying density.  Neural Networks: The paper discusses the use of the k-means algorithm for vector quantization in image segmentation and pattern classification tasks. The introduction of state variables that correspond to certain statistics of the dynamic behavior of the algorithm allows for the efficient finding of class boundaries directly from sparse data and the placement of centers for local Gaussian classifiers.
Neural Networks, Theory.   Explanation:  - Neural Networks: The paper discusses the limitations of self-organizing maps (SOM), which are a type of neural network. - Theory: The paper reviews recent empirical findings and relevant theory to discuss the limitations of SOM.
Reinforcement Learning.   Explanation: The paper presents a method for reducing the number of failures during exploration in online reinforcement learning. The method formulates a set of actions for the RL agent to ensure that exploration is conducted in a policy space that excludes most of the unacceptable policies. The paper specifically mentions the domain of motion planning as an example of applying this method in RL. Therefore, the paper belongs to the sub-category of Reinforcement Learning in AI.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of a connectionist network for learning problems and the elimination of unneeded weights in the network.   Probabilistic Methods: The paper proposes a method for identifying and eliminating input variables using nonparametric density estimation and mutual information, which are both probabilistic concepts.
Genetic Algorithms.   Explanation: The paper explicitly discusses the use of genetic algorithms in populations of neural networks, and specifically focuses on the role of diploidy and dominance operators in these algorithms. While neural networks are mentioned, they are not the primary focus of the paper, and the other sub-categories of AI are not mentioned at all.
Theory.   Explanation: The paper does not involve any AI techniques or algorithms; it focuses on the theoretical implications of the Multiscalar architecture's features for compiler task selection. Of the sub-categories listed, Theory is therefore the closest fit.
Reinforcement Learning.   Explanation: The paper introduces the Introspection Approach, which is a method for a learning agent employing reinforcement learning to decide when to ask a training agent for instruction. The paper discusses how this approach improves the learning speed of the agent without reducing the interaction with the trainer. Therefore, the paper is primarily focused on reinforcement learning.
Rule Learning, Theory.   The paper presents a method for feature construction and selection, which is a key aspect of rule learning. The method is based on a non-greedy strategy, which is a theoretical approach to optimization. Therefore, the paper belongs to the sub-categories of Rule Learning and Theory.
Probabilistic Methods.   Explanation: The paper discusses a Bayesian approach for finding latent classes in the data using finite mixture models to describe the underlying structure in the data. This approach involves using full joint probability models for exploratory data analysis. The paper also presents a case study using a data set from an educational study, demonstrating the application of the Bayesian classification approach. Therefore, the paper primarily belongs to the sub-category of Probabilistic Methods in AI.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents a constructive algorithm for the hierarchical mixture of experts (HME) architecture, which is a type of neural network. The HME is viewed as a tree structured classifier, and the paper proposes a likelihood splitting criteria to adaptively grow the tree during training.   Probabilistic Methods: The paper also proposes a method to prune branches away from the tree by considering only the most probable path through the tree. This approach is based on probabilistic methods, as it involves selecting the most likely outcome based on the probabilities assigned to each path in the tree.
Rule Learning.   Explanation: The paper discusses improving shared rules in multiple category domain theories, which is a key aspect of rule learning in AI. The other sub-categories of AI (Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, Theory) are not directly relevant to the content of the paper.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the distinction between random errors or 'noise' and systematic errors, which are both related to probability and statistics. The paper also examines techniques used in AI research for recognizing such errors, which often involve probabilistic methods.  Rule Learning: The paper presents a framework for discussing imperfect data and the resulting problems it may cause, which involves identifying patterns and rules in the data. The task of describing observations in a way that is useful for future problem-solving and learning tasks also involves rule learning.
Genetic Algorithms.   Explanation: The paper examines the structure of the fitness landscape in genetic programming and analyzes a range of problems using genetic algorithms. The paper also discusses measures related to perceived difficulty in genetic programming, which is a key aspect of genetic algorithms.
Theory  Explanation: The paper presents a new method for feature subset selection based on the Minimum Description Length (MDL) principle, which is a theoretical framework for model selection and compression. The paper does not discuss any specific AI algorithms or techniques such as neural networks or reinforcement learning.
Rule Learning. This paper belongs to the sub-category of Rule Learning. The text discusses the use of decision lists, which are ordered lists of conjunctive rules, and how inductive algorithms such as AQ and CN2 learn decision lists incrementally, one rule at a time. The paper also addresses the rule overlap problem and proposes a novel solution to the problem by composing decision lists from homogeneous rules. The focus of the paper is on learning rules and decision lists, which falls under the sub-category of Rule Learning in AI.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the algorithm presented constructs locally independent fuzzy rules from example data to build the fuzzy graphs.   Probabilistic Methods are also present in the text as the resulting fuzzy graphs are based on fuzzy rules that operate solely on selected, important attributes, which allows for the application of these graphs to high dimensional spaces. This implies a probabilistic approach to selecting the most relevant attributes for the model.
Probabilistic Methods.   Explanation: The paper uses Hidden Markov Models (HMMs), which are a type of probabilistic model, to analyze motifs in steroid dehydrogenases and their homologs. HMMs are a statistical model that can be used to model sequences of observations, and they are commonly used in bioinformatics to identify patterns in biological sequences. The paper specifically uses HMMs to identify conserved motifs in steroid dehydrogenases and their homologs, which are important for understanding the function and evolution of these enzymes. Therefore, the paper belongs to the sub-category of Probabilistic Methods in AI.
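As a generic illustration of how an HMM assigns probability to a sequence of observations (a toy two-state model with made-up parameters, not the paper's dehydrogenase motif model), the forward algorithm runs:

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: total probability of an observation sequence under an HMM."""
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        # Sum over all predecessor states, then emit the next observation.
        alpha = {s: sum(alpha[r] * trans_p[r][s] for r in states) * emit_p[s][o]
                 for s in states}
    return sum(alpha.values())

# Hypothetical two-state model over a two-symbol alphabet.
states = ("A", "B")
start = {"A": 0.6, "B": 0.4}
trans = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
p = forward("xy", states, start, trans, emit)
```

In sequence analysis the states would correspond to motif positions and the emissions to amino acids, with parameters fitted to a family of related sequences.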
Reinforcement Learning.   Explanation: The paper describes a methodology for using reinforcement learning to enable an intelligent teaching system to make high level strategy decisions based on low level student modeling information. The paper also discusses the advantages and drawbacks of reinforcement learning, and proposes an off-line learning methodology using sample data and small amounts of expert knowledge to bypass the problem of needing a significant number of trials for learning. There is no mention of any of the other sub-categories of AI listed in the question.
Neural Networks.   Explanation: The paper proposes a neural model for temporal pattern processing, which utilizes leaky integrators in a self-organizing system. The model exhibits compositionality, which is a property commonly associated with neural networks. The other sub-categories of AI (Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, Theory) are not directly relevant to the content of this paper.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper mentions the use of artificial neural networks, in combination with principal component analysis and power spectrum estimation, to accurately estimate an operator's level of alertness from EEG measures.  Probabilistic Methods: The paper relies on principal component analysis and power spectrum estimation, statistical techniques for dimensionality reduction and signal characterization that underpin the probabilistic treatment of the EEG data.
Probabilistic Methods, Theory  The paper belongs to the sub-category of Probabilistic Methods because it uses statistical models such as Smoothing Spline ANOVA to analyze temperature data. The paper also belongs to the sub-category of Theory because it discusses the mathematical and statistical principles behind the Smoothing Spline ANOVA method and how it can be applied to spatial-temporal analysis of temperature.
Probabilistic Methods.   Explanation: The paper discusses exact and approximate inference algorithms for Bayesian networks, which are a type of probabilistic graphical model. The paper also analyzes the robustness of these algorithms in the context of finitely generated convex sets of distributions.
Genetic Algorithms, Fractals, Theory.   Genetic Algorithms: The paper extensively discusses the use of genetic algorithms in solving complex problems such as the traveling salesman problem and image compression. The authors explain how genetic algorithms work and provide examples of their application in various fields.  Fractals: The paper explores the concept of fractals and their use in modeling complex systems. The authors explain how fractals can be used to generate realistic images and simulate natural phenomena such as weather patterns.  Theory: The paper delves into the theoretical underpinnings of chaos theory, fractals, and genetic algorithms. The authors provide a detailed explanation of the mathematical principles behind these concepts and how they can be applied in real-world scenarios.
Neural Networks.   Explanation: The paper discusses the perceptron, which is a simple biologically inspired model for two-class learning problems. The perceptron is a type of neural network, which is a sub-category of AI that is inspired by the structure and function of the human brain. The paper focuses on the geometry of what a perceptron can learn and different methods of training it, which are key concepts in neural network theory. The practical applications evaluated in the paper also involve the use of neural networks for classification tasks.
Probabilistic Methods, Neural Networks  Probabilistic Methods: The paper proposes a probabilistic modeling approach called Cluster-Weighted Modeling (CWM) for time series prediction and characterization. CWM is a mixture model that assigns each data point to one of several clusters, each with its own probability distribution. The paper discusses the use of Bayesian inference to estimate the model parameters and make predictions.  Neural Networks: The paper also discusses the use of neural networks as a component of the CWM approach. Specifically, the authors propose using a neural network to model the conditional probability distribution of each cluster given the input data. The neural network is trained using backpropagation and stochastic gradient descent. The paper provides experimental results comparing the performance of CWM with and without the neural network component.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses placing a probability distribution on the unknown inputs and maximizing the probability of the data given the parameters. This is a probabilistic approach to training the neural network model.   Neural Networks: The paper defines a neural network model for discovering an underlying latent variable space of lower dimensionality. The model is trained using a probabilistic approach, but it is still a neural network model. The paper also presents preliminary results of applying this model to protein data.
Probabilistic Methods.   Explanation: The paper describes research done within the Center for Biological and Computational Learning, which is focused on developing probabilistic models of learning and inference in biological and artificial systems. The paper also mentions grants from the National Science Foundation and ONR/ARPA, which are both organizations that fund research in probabilistic methods. Additionally, the author mentions being supported by a Postdoctoral Fellowship from the Deutsche Forschungsgemeinschaft and a NSF/CISE Postdoctoral Fellowship, both of which are likely to be related to probabilistic methods given the focus of the research center.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper discusses an alternative technique for evolving graph and network structures via genetic programming, specifically comparing edge encoding to cellular encoding. The paper also mentions the genetic search process and the experimental investigation of the relative merits of these encoding schemes.  Neural Networks: The paper specifically mentions the evolution of recurrent neural networks as one of the problems for which edge encoding may be particularly useful.
Rule Learning, Theory.   Explanation:  - Rule Learning: The paper presents a technique for evaluating classifications by comparing rule sets. Rules are represented as objects in an n-dimensional hyperspace, and the similarity of classes is computed from the overlap of the geometric class descriptions. This is a typical approach in rule learning, where rules are represented as patterns in a feature space and similarity measures are used to compare them.  - Theory: The paper proposes a new method for evaluating classifications based on geometric comparison of rule sets. The authors provide a theoretical framework for the method and explain how it can be applied to different types of classifications generated by different algorithms, with different numbers of classes and different attribute sets. The paper also includes experimental results from a case study in a medical domain, which demonstrate the effectiveness of the proposed method.
Reinforcement Learning, Rule Learning.   The paper belongs to the sub-categories of Reinforcement Learning and Rule Learning.   Reinforcement Learning is present in the paper as the 'Truth from Trash' model views learning as a process that uses environmental feedback to assemble fortuitous sensory predispositions into useful information vehicles. This process is similar to the reinforcement learning paradigm, where an agent learns to take actions based on feedback from the environment.   Rule Learning is present in the paper as the computer implementation of the 'Truth from Trash' model has been used to enhance the strategic abilities of a simulated, football-playing mobot. This involves learning rules or strategies based on the feedback received from the environment.
Probabilistic Methods, Theory.   Probabilistic methods are present in the paper through the analysis of the probabilistic version of causal irrelevance. The paper also develops axioms and formal semantics for statements of causal relevance, which falls under the category of theory.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper focuses on three fundamental problems in neural network systems and proposes a system called VISOR that consists of three main components, including a Low-Level Visual Module, a Schema Module, and a Response Module.   Reinforcement Learning: The Response Module in VISOR learns to associate the schema activation patterns with external responses, enabling the external environment to provide reinforcement feedback for the learning of schematic structures.
The paper belongs to the sub-categories of AI: Neural Networks, Reinforcement Learning, Probabilistic Methods.   Neural Networks: The paper discusses the use of neural networks for robot navigation and protein folding. It explains how neural networks can be trained to learn from data and make predictions.  Reinforcement Learning: The paper also discusses the use of reinforcement learning for robot navigation. It explains how the robot can learn to navigate its environment by receiving rewards for successful actions and punishments for unsuccessful actions.  Probabilistic Methods: The paper discusses the use of probabilistic methods for protein folding. It explains how probabilistic models can be used to predict the most likely structure of a protein based on its amino acid sequence.
Probabilistic Methods.   Explanation: The paper presents a framework for characterizing Bayesian classification methods, which are probabilistic methods for classification. The paper discusses the spectrum of allowable dependence in a given probabilistic model, from the Naive Bayes algorithm at the most restrictive end to the learning of full Bayesian networks at the most general extreme. The paper analyzes the assumptions made as one moves along this spectrum and shows the tradeoffs between model accuracy and learning speed. The paper also presents a general induction algorithm that allows for traversal of this spectrum depending on the available computational power for carrying out induction.
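The restrictive end of that spectrum, Naive Bayes, can be sketched in a few lines (a generic toy implementation with hypothetical data, not the paper's induction algorithm):

```python
from collections import Counter, defaultdict

def train_nb(examples):
    """Naive Bayes: class prior times per-feature conditionals, with features
    assumed independent given the class (the most restrictive model)."""
    priors = Counter(label for _, label in examples)
    conds = defaultdict(Counter)
    for feats, label in examples:
        for i, v in enumerate(feats):
            conds[label][(i, v)] += 1

    def predict(feats):
        def score(c):
            p = priors[c] / len(examples)
            for i, v in enumerate(feats):
                # Laplace smoothing so unseen feature values keep nonzero mass.
                p *= (conds[c][(i, v)] + 1) / (priors[c] + 2)
            return p
        return max(priors, key=score)
    return predict

# Hypothetical binary-feature training data.
data = [((1, 0), "pos"), ((1, 1), "pos"), ((0, 0), "neg"), ((0, 1), "neg")]
predict = train_nb(data)
```

Moving along the spectrum means relaxing the independence assumption, at the cost of more parameters to estimate and a slower induction step.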
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper discusses the genetic assimilation of acquired traits over evolutionary time, which is a key concept in genetic algorithms.  Neural Networks: The second model presented in the paper involves the evolution of neural network controllers for a mobile robot, demonstrating the use of neural networks in the context of evolutionary learning.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses the Baldwin Effect, which is a phenomenon in which acquired traits can become genetically specified in later generations, thus speeding up the evolutionary process. This process is similar to the way genetic algorithms work, where solutions are evolved over time through the application of genetic operators such as mutation and crossover.  Theory: The paper presents conditions under which genetic assimilation can take place, and discusses the evolutionary trade-off between the costs and benefits of lifetime adaptation. It also notes the differences between genotypic and phenotypic spaces and the importance of neighbourhood correlation for an acquired characteristic to become genetically specified. These are all theoretical concepts related to the study of evolution and adaptation.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses the use of Probabilistic Option Trees, a probabilistic method for classification/regression tasks. The probabilities of following different subtrees are learned by the system, which is a key characteristic of probabilistic methods.  Neural Networks: The paper mentions that Decision Trees are much faster to build than Neural Networks. It also discusses Probabilistic Option Trees, which can be seen as a combination of Decision Trees and Neural Networks: the system learns the probabilities of following different subtrees, much as Neural Networks learn the weights of different connections.
Neural Networks, Automata and Dynamical Systems Approaches.   Neural Networks: The paper discusses the use of recurrent neural networks (RNNs) in modeling finite state machines (FSMs) and compares their performance to traditional automata-based approaches. The authors also discuss the use of RNNs in language modeling and speech recognition.  Automata and Dynamical Systems Approaches: The paper also discusses traditional automata-based approaches to modeling FSMs, including deterministic and non-deterministic finite automata, pushdown automata, and Turing machines. The authors also discuss the use of dynamical systems theory in modeling complex systems and the potential for combining RNNs and dynamical systems approaches.
Neural Networks. This paper belongs to the sub-category of Neural Networks. The paper discusses the backpropagation algorithm for training artificial neural networks, which is a fundamental technique in the field of neural networks. The paper presents results for various modifications of the backpropagation algorithm, such as modified BP with a momentum term and BP with weight decay. The paper does not discuss any other sub-categories of AI.
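The two variants named above differ only in the weight-update rule. A schematic single-weight update combining a momentum term and weight decay might look like this (illustrative constants, not the paper's settings):

```python
def bp_update(w, grad, velocity, eta=0.1, mu=0.9, lam=0.001):
    """One backprop weight update with a momentum term and weight decay."""
    # Momentum blends the previous step into the new one; weight decay
    # effectively adds lam*w to the gradient, shrinking weights toward zero.
    velocity = mu * velocity - eta * (grad + lam * w)
    return w + velocity, velocity

# Minimize the toy loss w**2 (gradient 2*w) for a few steps.
w, v = 1.0, 0.0
for _ in range(3):
    grad = 2 * w
    w, v = bp_update(w, grad, v)
```

With mu=0 and lam=0 this reduces to plain gradient-descent backpropagation; the modifications studied in such papers are exactly these extra terms.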
Neural Networks.   Explanation: The paper discusses the use of recurrent neural networks to learn and mimic the behavior of deterministic finite-state automata. The algorithm proposed in the paper involves encoding weights directly into the neural network, and the paper compares its approach to other methods proposed in the literature for constructing DFA in neural networks. The entire paper is focused on the use of neural networks for this specific task and does not discuss any other sub-category of AI.
Theory.   Explanation: The paper presents a theoretical result on Lyapunov-theoretic techniques for nonlinear stability, and does not involve any practical implementation or application of AI methods such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Neural Networks, Rule Learning.   Neural Networks: The paper focuses on the extraction of symbolic knowledge from trained neural networks and the direct encoding of (partial) knowledge into networks prior to training. It discusses the use of recurrent neural networks for classifying strings of a regular language and extracting rules defining the learned grammar.  Rule Learning: The paper specifically deals with the extraction of deterministic finite-state automata (DFA's) from recurrent neural networks, which can be seen as a form of rule learning. The paper also introduces a heuristic for selecting the best DFA among the consistent models extracted from the network.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper discusses the use of a time-delay neural network (TDNN) architecture to improve the performance of job-shop scheduling. The TDNN is used to process irregular-length schedules and is shown to match the performance of a previous hand-engineered system.  Reinforcement Learning: The paper formulates the job-shop scheduling task for solution by the reinforcement learning algorithm TD(λ). The TD(λ) algorithm is used to learn from experience and improve the quality of the resulting schedules. The paper shows that the TDNN-TD(λ) network significantly outperforms the best previous (non-learning) solution to this problem in terms of the quality of the resulting schedules and the number of search steps required to construct them.
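A minimal sketch of the TD(λ) value update with eligibility traces, on a hypothetical two-state chain (generic textbook form, not the scheduling network itself):

```python
def td_lambda_episode(values, episode, alpha=0.1, gamma=1.0, lam=0.8):
    """One TD(lambda) pass over an episode of (state, reward, next_state) steps."""
    e = {s: 0.0 for s in values}           # eligibility traces
    for s, r, s2 in episode:
        # TD error: reward plus discounted next value minus current estimate.
        delta = r + gamma * values.get(s2, 0.0) - values[s]
        e[s] += 1.0                        # accumulating trace for the visited state
        for k in values:
            values[k] += alpha * delta * e[k]
            e[k] *= gamma * lam            # traces decay at every step
    return values

# Toy chain A -> B -> terminal, reward 1 at the end (illustrative only).
V = {"A": 0.0, "B": 0.0}
V = td_lambda_episode(V, [("A", 0.0, "B"), ("B", 1.0, None)])
```

The traces let the final reward update earlier states in a single pass, which is what makes TD(λ) practical for long schedules.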
Neural Networks.   Explanation: The paper deals with the simulation of Turing machines by neural networks, which are made up of interconnections of processors that update their states based on previous states. The main result is the simulation of all Turing machines by nets, including the computation of a universal partial-recursive function. The paper also updates previous results to include the simulation of binary-tape machines. There is no mention of any other sub-category of AI in the text.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses the use of probabilistic methods in real-time search algorithms, specifically in the context of domain properties. The authors mention the use of probabilistic models to estimate the likelihood of different outcomes and to guide the search process.  Reinforcement Learning: The paper also discusses the use of reinforcement learning in real-time search algorithms. The authors mention the use of reward functions to guide the search process and the use of learning algorithms to improve the performance of the search algorithm over time.
Theory. This paper belongs to the Theory sub-category of AI. The paper discusses the problem of approximating a function using a linear combination of n translates of a given function and uses a lemma by Jones and Barron to show that it is possible to define function spaces and functions for which the rate of convergence to zero of the error is O(1/√n) in any number of dimensions. The paper also describes a constructive iterative procedure that can achieve this rate. The paper does not discuss Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning.
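The O(1/√n) rate comes from the Maurey–Jones–Barron lemma; in its standard Hilbert-space form (stated here from the standard literature, not quoted from the paper):

```latex
% If f lies in the closure of the convex hull of a set G in a Hilbert space,
% with \|g\| \le b for every g \in G, then for each n there exists a convex
% combination f_n of n elements of G such that
\[
  \|f - f_n\|^2 \;\le\; \frac{b^2 - \|f\|^2}{n},
\]
% so the approximation error \|f - f_n\| decreases as O(1/\sqrt{n}),
% independently of the dimension of the underlying space.
```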
Rule Learning.   Explanation-based learning is a type of rule learning where a specific problem's solution is generalized into a form that can be later used to solve conceptually similar problems. The paper presents an algorithm that generalizes explanation structures to acquire recursive and iterative concepts, which can be applied using a PROLOG-like problem solver. The focus is on generalizing the structure of explanations, which helps avoid negative effects of learning.
Genetic Algorithms, Reinforcement Learning  Genetic Algorithms: The paper discusses how competitive environments can lead to the evolution of better solutions for complex tasks, which is a key concept in genetic algorithms. The idea of natural selection and survival of the fittest is also mentioned, which is a fundamental principle of genetic algorithms.  Reinforcement Learning: The paper discusses how agents can learn from their environment and improve their performance through trial and error, which is a key concept in reinforcement learning. The idea of rewards and punishments is also mentioned, which is a fundamental principle of reinforcement learning.
Probabilistic Methods.   Explanation: The paper discusses a probabilistic model for nonparametric mixtures and describes two Gibbs sampling algorithms for approximating Bayesian inferences in this model. The paper also provides a convergence rate bound for the Markov chains resulting from the Gibbs sampling.
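As a generic illustration of Gibbs sampling (not the paper's nonparametric-mixture samplers), each variable is resampled from its conditional distribution given the others; for a standard bivariate normal with correlation rho the conditionals are known exactly:

```python
import random

def gibbs_bivariate_normal(n, rho, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Each coordinate is resampled from its exact conditional given the other."""
    rng = random.Random(seed)
    x = y = 0.0
    sd = (1 - rho * rho) ** 0.5
    samples = []
    for _ in range(n):
        x = rng.gauss(rho * y, sd)   # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.gauss(rho * x, sd)   # y | x ~ N(rho*x, 1 - rho^2)
        samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(5000, rho=0.8)
```

As the chain mixes, the empirical correlation of the samples approaches 0.8; the convergence-rate bounds discussed in the paper quantify how quickly such chains approach their target distribution.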
Rule Learning, Theory.   The paper describes a technique for discovering intermediate concepts in learning from examples, which involves decomposing real-valued functions and presenting them in symbolic form. This technique is based on a decomposition method originally developed for the design of switching circuits and recently extended to handle incompletely specified multi-valued functions. The paper also evaluates the method on a number of test functions and shows that the decomposition hierarchy does not depend on a given repertoire of basic functions (background knowledge). This work is primarily focused on developing a theoretical method for constructing intermediate concepts, which falls under the category of Theory in AI. Additionally, the method involves the use of symbolic representations and logical rules, which aligns with the sub-category of Rule Learning.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the use of a highly efficient probabilistic classifier to select examples for training another classifier.   Rule Learning: The paper specifically mentions the use of the C4.5 rule induction program as the classifier being trained. The paper also discusses the use of uncertainty sampling methods, which are commonly used in rule learning.
Probabilistic Methods.   Explanation: The paper discusses the use of probabilistic models to represent causal relationships between variables, and derives a formula for inequality constraints on the observed distribution based on instrumental variables. The paper does not mention any other sub-categories of AI such as case-based reasoning, genetic algorithms, neural networks, reinforcement learning, or rule learning.
Probabilistic Methods.   Explanation: The paper discusses Bayesian confidence intervals, which are a probabilistic method used to estimate uncertainty in smoothing splines. The paper does not mention any other sub-categories of AI such as case-based reasoning, genetic algorithms, neural networks, reinforcement learning, or rule learning.
Probabilistic Methods, Rule Learning  Explanation:   Probabilistic Methods: The paper discusses a setting in which hypotheses may assign confidences to each of their predictions. The authors give a specific method for assigning confidences to the predictions of decision trees, which is closely related to one used by Quinlan. This suggests a technique for growing decision trees which turns out to be identical to one proposed by Kearns and Mansour. These methods involve probabilistic reasoning and are therefore related to probabilistic methods in AI.  Rule Learning: The paper describes improvements to Freund and Schapire's AdaBoost algorithm, which is a rule learning algorithm. The authors refine the criterion for training weak hypotheses and give a simplified analysis of AdaBoost in a setting where hypotheses may assign confidences to their predictions. They also give two boosting methods for multiclass classification problems, particularly to the multi-label case in which each example may belong to more than one class. These methods involve learning rules and are therefore related to rule learning in AI.
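The confidence-rated reweighting at the core of such boosting methods can be sketched generically (a standard AdaBoost-style update with hypothetical margins, not the authors' exact refinement):

```python
import math

def reweight(weights, margins):
    """One confidence-rated boosting round: upweight examples the weak
    hypothesis got wrong (negative margin y_i * h(x_i)), downweight the rest."""
    new = [w * math.exp(-m) for w, m in zip(weights, margins)]
    z = sum(new)                 # normalizer Z_t; minimizing it drives training
    return [w / z for w in new]

# Three examples: the weak hypothesis is confidently right on the first,
# weakly right on the second, and confidently wrong on the third.
w = reweight([1 / 3, 1 / 3, 1 / 3], [1.0, 0.2, -1.0])
```

After the update the misclassified third example carries the most weight, so the next weak hypothesis is forced to concentrate on it.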
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms are the main focus of the paper, as the title suggests. The paper discusses the learning process on the population level using Genetic Algorithms.   Reinforcement Learning is also mentioned briefly in the abstract as one of the sub-categories of AI, but it is not the main focus of the paper.
Probabilistic Methods.   Explanation: The paper discusses a method for learning Bayesian networks, which are probabilistic graphical models. The approach involves explicitly representing and learning the local structure in the conditional probability distributions (CPDs) that quantify these networks, which is a probabilistic method. The paper also evaluates the proposed learning procedure empirically, which is a common practice in probabilistic methods research.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses a method to bound the test errors of voting committees, which involves using linear programming to infer committee error bounds based on the validation of individual classifiers. This is a probabilistic approach to evaluating the performance of a committee of classifiers.  Theory: The paper presents a theoretical approach to validating voting committees, based on the idea that it is more efficient to validate individual classifiers and use linear programming to infer committee error bounds. The paper also extends the method to infer bounds for classifiers in general, which is a theoretical contribution to the field of machine learning.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper compares the performance of a PI controller with two neural networks - one trained to predict the steady-state output of the PI controller and the other trained to minimize the n-step ahead error between the coil output and the set point.   Reinforcement Learning: The paper also includes a reinforcement learning agent trained to minimize the sum of the squared error over time. The agent is compared with the PI controller and the two neural networks.
Rule Learning.   Explanation: The paper describes improvements to the CN2 algorithm, which is a rule induction algorithm. The paper discusses the use of entropy and the Laplacian error estimate as search heuristics for inducing rules, and also discusses the generation of unordered rules. There is no mention of any of the other sub-categories of AI listed.
Neural Networks, Theory.   Neural Networks: The paper is a book review of "Introduction to the Theory of Neural Computation" and discusses the strengths and weaknesses of connectionist approaches, which are a type of neural network modeling. The book covers a number of neural network models and provides critical analyses and comparisons between them.  Theory: The paper discusses the theoretical perspective of neural computation and establishes links to other disciplines such as statistics and control theory. The book itself is written from the perspective of physics, the home discipline of the authors, and provides a concise introduction to the theory of neural computation.
This paper does not belong to any of the sub-categories of AI listed. It is a computer architecture paper that proposes a new execution model and micro-architecture for superscalar processors to improve performance by executing both paths after different branches. The paper does not discuss any AI techniques or algorithms.
Probabilistic Methods, Theory.   The paper belongs to the sub-category of Probabilistic Methods because it describes a tree learning algorithm that approximates the Bayesian decision theoretic solution to the learning task. The paper also belongs to the sub-category of Theory because it derives the algorithm from first principles and discusses its implications to incremental learning and the use of multiple models.
Rule Learning, Theory.   The paper discusses the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. It examines notions of relevance and irrelevance and presents definitions for these concepts. The paper also describes a method for feature subset selection using cross-validation that is applicable to any induction algorithm. Overall, the paper focuses on the theoretical aspects of feature selection in rule learning.
Rule Learning, Probabilistic Methods.   Rule Learning is the most related sub-category as the paper describes a machine learning method that induces solutions in the form of ordered disjunctive normal form (DNF) decision rules. The central objective of the method is to induce compact, easily interpretable solutions.   Probabilistic Methods are also present as the paper mentions that the new techniques are competitive with existing machine learning and statistical methods and can sometimes yield superior regression performance. This suggests that the method involves some form of probabilistic modeling or inference.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the use of dynamic branch prediction, which is a probabilistic method used to predict the outcome of conditional branches. The confidence information gathered from the dynamic branch prediction state tables is also used to determine whether dual path execution or branch prediction should be used, which is another example of probabilistic reasoning.  Rule Learning: The paper proposes a hybrid branch predictor scheme that uses a limited form of dual path execution along with dynamic branch prediction. The confidence mechanism used to determine whether dual path execution or branch prediction should be used is based on a set of rules that take into account the confidence information gathered from the dynamic branch prediction state tables. Therefore, the paper involves the use of rule learning to improve the performance of the branch predictor.
Probabilistic Methods, Reinforcement Learning  Probabilistic Methods: The paper discusses policies for deciding which branches to fork, which involves probabilistic decision-making.  Reinforcement Learning: The paper describes mechanisms for managing competition between primary and alternate path threads for critical resources, which can be seen as a form of reinforcement learning.
Neural Networks, Probabilistic Methods, Reinforcement Learning, Theory.   Neural Networks: The paper discusses machine learning approaches to concept induction, which often involve neural networks.   Probabilistic Methods: The paper also discusses probabilistic models of concept induction, such as Bayesian models.   Reinforcement Learning: The paper discusses learning sequential behaviors, which can be modeled using reinforcement learning.   Theory: The paper compares the rhetoric in the machine learning and psychological literature and suggests that concrete computational models may be less useful than abstract simulations. The paper also presents an abstract simulation to explain a phenomenon in category learning.
Probabilistic Methods, Theory  Probabilistic Methods: The paper discusses the use of hidden Markov models (HMMs) for homology detection, which is a probabilistic method.  Theory: The paper discusses the generalization of pairwise sequence comparison algorithms to homology detection via family pairwise search, which is a theoretical concept.
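The HMM scoring underlying such homology detection can be sketched with the forward algorithm, which sums over all state paths to obtain the probability of a sequence under the model. The two-state model below, with its symbols and probabilities, is invented for illustration and is far smaller than a real profile HMM:

```python
# Minimal forward algorithm: probability of an observation sequence under an HMM.
# The two-state model and its parameters are illustrative, not from any real profile HMM.
states = [0, 1]
start = [0.6, 0.4]                     # initial state probabilities
trans = [[0.7, 0.3], [0.4, 0.6]]       # trans[i][j] = P(next state j | state i)
emit = [{'A': 0.5, 'C': 0.5}, {'A': 0.1, 'C': 0.9}]  # emission probabilities

def forward(seq):
    # alpha[j] = P(observations so far, current state = j)
    alpha = [start[j] * emit[j][seq[0]] for j in states]
    for symbol in seq[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in states) * emit[j][symbol]
                 for j in states]
    return sum(alpha)  # total probability of the sequence

print(forward("ACC"))
```

Scoring a candidate sequence against each family model and ranking by this probability (in practice, its logarithm) is the basic operation that both pairwise and family-based search strategies build on.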
Theory  Explanation: The paper discusses a new approach to learning and pattern finding in the context of Knowledge Discovery in Databases (KDD). It also explores the limitations of a Pattern Theoretic approach as applied to KDD. While other sub-categories of AI may be relevant to KDD, such as probabilistic methods or rule learning, the focus of this paper is on the theoretical approach of Pattern Theory.
The paper does not belong to any of the sub-categories of AI listed. It is a guide to multiple alignment in bioinformatics and does not involve any AI techniques.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the article describes a new system, OC1, for induction of oblique decision trees. The system combines deterministic hill-climbing with two forms of randomization to find a good oblique split at each node of a decision tree.   Probabilistic Methods are also present in the form of that randomization, which OC1 uses to escape local minima during split selection. The article presents extensive empirical studies, using both real and artificial data, showing that OC1 constructs oblique trees that are smaller and more accurate than their axis-parallel counterparts, and examines the benefits of randomization for the construction of oblique decision trees.
Probabilistic Methods, Rule Learning  Explanation:   Probabilistic Methods: The paper discusses the use of a randomized generalized cross-validation (GCV) approach for adaptive tuning of numerical weather prediction models. GCV is a probabilistic method used to estimate the tuning parameters of the model. The paper also discusses the use of Bayesian inference in the data assimilation process.  Rule Learning: The paper discusses the use of a rule-based approach for tuning the numerical weather prediction models. The authors use a set of rules to determine the optimal tuning parameters for the model. The rules are based on the performance of the model on a validation dataset. The authors also discuss the use of a rule-based approach for selecting the optimal ensemble size in the data assimilation process.
Reinforcement Learning, Explanation-Based Learning.   Reinforcement Learning is the main focus of the paper, as the authors extend Explanation-Based Reinforcement Learning to hierarchical domains. The paper also utilizes Explanation-Based Learning to combine the generalization ability of EBL with the ability of RL to learn optimal plans.
Probabilistic Methods, Rule Learning, Theory.   Probabilistic Methods: The paper discusses the need for discretization in many learning paradigms that assume nominal data, which is a probabilistic approach to learning.   Rule Learning: The BRACE paradigm and algorithm presented in the paper involve ranking and classifying boundaries to discretize the data, which is a rule-based approach to learning.   Theory: The paper presents a list of objectives for efficient and effective discretization, which is a theoretical framework for approaching the problem. The paper also discusses the potential for extending the BRACE paradigm to other types of clustering/unsupervised learning, which is a theoretical consideration.
Probabilistic Methods. This paper belongs to the sub-category of Probabilistic Methods in AI. The paper discusses the Bayesian classifier, which is a probabilistic method for classification. The authors propose and evaluate algorithms for detecting dependencies among attributes to improve the accuracy of the Bayesian classifier. The paper also discusses the estimation of probabilities from training data, which is a key aspect of probabilistic methods.
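A minimal sketch of the kind of Bayesian classifier discussed, with class and conditional probabilities estimated directly from training counts under the attribute-independence assumption; the toy dataset and attribute values are invented:

```python
from collections import Counter

# Toy training data (invented): each row is (attribute values, class label).
data = [(('sunny', 'hot'), 'no'), (('sunny', 'mild'), 'no'),
        (('rain', 'mild'), 'yes'), (('rain', 'hot'), 'yes'),
        (('rain', 'mild'), 'yes')]

class_counts = Counter(label for _, label in data)
# attr_counts[(position, value, label)] = number of matching training rows
attr_counts = Counter()
for attrs, label in data:
    for i, v in enumerate(attrs):
        attr_counts[(i, v, label)] += 1

def predict(attrs):
    # Pick the class maximizing P(class) * prod_i P(attr_i | class),
    # with all probabilities estimated from training-set counts.
    def score(label):
        p = class_counts[label] / len(data)
        for i, v in enumerate(attrs):
            p *= attr_counts[(i, v, label)] / class_counts[label]
        return p
    return max(class_counts, key=score)

print(predict(('rain', 'hot')))
```

Detecting dependencies among attributes, as the paper proposes, amounts to relaxing the independence assumption baked into the product in `score`.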
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the use of hidden Markov models, which are a type of probabilistic model, for multiple sequence alignment. The SAM and HMMER methods are both probabilistic methods for generating HMMs.   Theory: The paper describes studies attempting to infer appropriate parameter constraints for the generation of de novo HMMs for various protein sequences. This involves developing theoretical models and testing them against empirical data.
Neural Networks, Theory.  Explanation:  - Neural Networks: The paper discusses the use of recurrent networks as representations for formal language learning and the extraction of finite state machines from their internal state trajectories. - Theory: The paper presents two conditions that can lead to illusory finite state descriptions, which is a theoretical analysis of the limitations of the extraction methods.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the probability of accurate generalization and how it can be increased by taking into account the probability of the occurrence of functions. It also identifies several conditions that should be considered when selecting an appropriate bias for a particular problem, which involves probabilistic reasoning.   Theory: The paper discusses the theoretical concept of bias in learning algorithms and how it is necessary for generalization. It also presents examples to illustrate the fact that no bias can lead to strictly better generalization than any other when summed over all possible functions or applications. The paper also explains how domain knowledge and an understanding of the conditions under which each learning algorithm performs well can be used to increase the probability of accurate generalization.
Reinforcement Learning, Probabilistic Methods.   Reinforcement Learning is the main sub-category of AI discussed in the paper, as the authors introduce a new model-based reinforcement learning method called H-learning and compare it with three other reinforcement learning methods.   Probabilistic Methods are also present in the paper, as the authors mention that the four methods differ along two dimensions: whether they are model-based or model-free, and whether they optimize discounted total reward or undiscounted average reward. These dimensions involve probabilistic considerations, as the authors explain that the model-based methods use a probabilistic model of the environment, and the discounted total reward criterion involves a discount factor that reflects the probability of future rewards.
Theory.   Explanation: The paper presents a new theorem for robust control analysis and design, which is based on generalizing various aspects of classical theorems. The paper does not discuss any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Genetic Algorithms, Neural Networks.   Genetic algorithms are mentioned in the abstract as the method used to optimize the topology and weights of the neural networks. The paper focuses on the use of simulated robotic agents with neural network processors as part of a method to ensure grounding. The agents' behavior suggests that they were also learning to build cognitive maps, which is a common application of neural networks.
Theory  Explanation: The paper focuses on the problem of correcting imperfect domain theories in Explanation-Based Learning, which is a theoretical approach to machine learning. The paper analyzes past research in the area and proposes the need for a "universal weak method" of domain theory correction, which is a theoretical concept. None of the other sub-categories of AI listed are directly related to the content of the paper.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses finding a maximum a posteriori (MAP) instantiation of Bayesian network variables, which is a probabilistic method.   Neural Networks: The paper presents a method for mapping a given Bayesian network to a massively parallel Boltzmann machine neural network architecture, which is a neural network approach. The paper also discusses using a massively parallel stochastic process on the Boltzmann machine architecture, which is a characteristic of neural networks.
Theory  Explanation: The paper presents a theoretical framework for heuristic routing in large communication networks. It describes the incremental design of a set of heuristic decision functions and carefully derives the properties of such heuristics under a set of simplifying assumptions about the network topology and load dynamics. The paper concludes with a discussion of the relevance of the theoretical results presented in the paper to the design of intelligent autonomous adaptive communication networks and outlines some directions of future research. The paper does not discuss or apply any of the other sub-categories of AI listed.
Probabilistic Methods.   Explanation: The paper describes a probabilistic method for clustering data using principal curves. The authors use a Bayesian approach to model the data and estimate the parameters, and they also discuss the use of mixture models and model selection criteria. There is no mention of any other sub-category of AI in the paper.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper analyzes algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. The performance of the algorithm is measured by the difference between the expected number of mistakes it makes on the bit sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions.   Theory: The paper provides upper and lower bounds on the minimum achievable difference between the expected number of mistakes made by the algorithm and the best expert. The authors also give efficient algorithms that achieve this minimum difference. The paper also discusses the implications of this result on the performance of batch learning algorithms in a PAC setting.
Case Based, Theory.   The paper belongs to the sub-category of Case Based AI because it presents a novel approach to structural similarity assessment and adaptation in case-based reasoning for synthesis. The approach involves representing cases structurally using an algebraic approach and using similarity relations to provide structure preserving case modifications. This approach enables the incorporation of generalization, abstraction, geometrical transformation, and their combinations into CBR.   The paper also belongs to the sub-category of Theory because it relates the approach to existing theories and provides the foundation for its systematic evaluation and appropriate usage. The representation of a modeled universe of discourse enables theory-based inference of adapted solutions.
Reinforcement Learning.   Explanation: The paper explicitly mentions that the learning agent employs reinforcement learning and is hindered by the sparse and weakly informative feedback from the critic. The approach presented in the paper involves incorporating occasional instruction from an automated training agent to improve the learning process. The experiments conducted in the paper vary the level of interaction between the trainer and the learner and a parameter that controls how the learner incorporates the trainer's actions. All of these aspects are related to reinforcement learning.
Theory.   Explanation: The paper presents an algorithm for improving the accuracy of algorithms for learning binary concepts, based on theoretical ideas and analysis. While the paper touches on topics such as representational power and compression, it does not utilize any of the specific sub-categories of AI listed in the question.
Rule Learning, Case Based.   The paper proposes a model of ratio decidendi as a justification structure consisting of a series of reasoning steps, which can be seen as a rule learning approach. Additionally, the model takes into account the specific facts of a case, which is a characteristic of case-based reasoning.
Neural Networks, Theory.  Explanation:  - Neural Networks: The paper discusses topographic mappings, which are a type of neural network. The paper also mentions several standard methods for preserving neighbourhood relationships in these mappings, which include self-organizing maps (SOMs) and neural gas.  - Theory: The paper focuses on the theoretical aspects of quantifying neighbourhood preservation in topographic mappings. It discusses how neighbourhoods are defined, how a perfectly neighbourhood preserving mapping is defined, and how an objective function for measuring discrepancies from perfect neighbourhood preservation is defined. The paper also introduces a particular measure for topographic distortion, which has the form of a quadratic assignment problem.
Theory.   Explanation: The paper describes and analyzes the PAC (probably approximately correct) model of concept learning, which is a theoretical framework for machine learning. The paper does not discuss any specific implementation or application of AI, such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Reinforcement Learning.   Explanation: The paper describes research investigating behavioral specialization in learning robot teams using reinforcement learning. The agents learn individually to activate particular behavioral assemblages given their current situation and a reward signal. The experiments evaluate the agents in terms of performance, policy convergence, and behavioral diversity. The degree of diversification and the performance of the team depend on the reward structure. Therefore, the paper primarily belongs to the sub-category of Reinforcement Learning in AI.
Neural Networks.   Explanation: The paper discusses the use of product units in neural networks and evaluates different training algorithms for such networks. The focus is on improving the performance of neural networks for Boolean logic function synthesis. There is no mention of other sub-categories of AI such as genetic algorithms, probabilistic methods, reinforcement learning, rule learning, or case-based reasoning.
This paper belongs to the sub-category of AI called Neural Networks. This is evident from the title of the paper, which explicitly mentions "Neural Networks". Additionally, the abstract mentions that the paper presents a "Symbolic Representation of Neural Networks". Therefore, it is clear that the paper is focused on the use and representation of neural networks in AI. No other sub-categories of AI are relevant to this paper.
Reinforcement Learning, Neural Networks, Theory.   Reinforcement Learning is present in the paper as the authors are designing computational architectures for the NRL Navigation task, which requires competent sensorimotor coordination. This task is a classic example of a reinforcement learning problem, where an agent learns to navigate an environment by receiving feedback in the form of rewards or punishments.   Neural Networks are also present, as the computational architectures designed for the task are built from neural networks, a common tool in machine learning for modeling complex relationships between inputs and outputs.   Theory is present in the paper as the authors are developing a cognitive model of how humans acquire skills on complex cognitive tasks, which involves a theoretical understanding of the underlying cognitive processes in skill acquisition.
Probabilistic Methods.   Explanation: The paper proposes additional postulates for belief revision that are sound relative to a qualitative version of probabilistic conditioning. The proposed postulates characterize belief revision as a process that may depend on elements of an epistemic state that are not necessarily captured by a belief set. The paper also establishes a model-based representation theorem which characterizes the proposed postulates and constrains the way in which entrenchment orderings may be transformed under iterated belief revision. These concepts are all related to probabilistic methods in AI.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper presents a two-layer connectionist system that learns and fine-tunes its search strategy.   Reinforcement Learning: The system is applied to a simulated, real-time, balance-control task, which is a classic example of a reinforcement learning problem. The paper also discusses the comparison of one-layer and two-layer networks, showing the importance of discovering new features and enhancing the original representation, which is a key aspect of reinforcement learning.
Reinforcement Learning. This paper belongs to the sub-category of Reinforcement Learning as it discusses the comparison between indirect and direct reinforcement learning methods for an infinite horizon Markov decision problem with unknown state-transition probabilities. The paper also suggests that given a fixed amount of computational power available per control action, it may be better to use a direct reinforcement learning method augmented with indirect techniques than to devote all available resources to a computationally costly indirect method.
Probabilistic Methods.   Explanation: The paper discusses a framework for modeling belief change based on knowledge and plausibility, which are defined in terms of probabilities. The authors also mention the use of prior probabilities and conditioning to update beliefs, which are concepts commonly used in probabilistic methods.
Probabilistic Methods.   Explanation: The paper discusses the use of Markov chain Monte Carlo (MCMC), which is a probabilistic method, for evaluating expectations of functions of interest under a target distribution. The paper also discusses the design of the transition kernel of the chain, which is a key aspect of MCMC, and the concept of Markov chain regeneration, which is a probabilistic method for allowing adaptation to occur infinitely often without disturbing the stationary distribution of the chain.
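The core MCMC loop described here can be sketched with a random-walk Metropolis sampler, whose accept/reject rule is one concrete transition kernel that leaves the target distribution stationary. The standard normal target and proposal width below are illustrative choices, not taken from the paper:

```python
import math, random

def metropolis(n_samples, step=1.0, seed=0):
    # Random-walk Metropolis: the acceptance rule makes the standard
    # normal density exp(-x^2 / 2) the stationary distribution of the chain.
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + rng.uniform(-step, step)
        # Accept with probability min(1, pi(proposal) / pi(x)).
        if rng.random() < math.exp((x * x - proposal * proposal) / 2.0):
            x = proposal
        samples.append(x)
    return samples

draws = metropolis(50000)
mean = sum(draws) / len(draws)
print(mean)
```

Expectations of functions of interest are then estimated by averaging the function over the draws; adaptive schemes, such as the regeneration-based ones the paper discusses, adjust `step` without disturbing the stationary distribution.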
Probabilistic Methods.   Explanation: The paper describes Bayesian models for interpolation, which are probabilistic methods that use Bayesian inference to estimate the posterior distribution of the interpolant. The authors also discuss the importance of choosing appropriate hyperparameters for the model, which is a common consideration in probabilistic modeling.
Probabilistic Methods, Neural Networks  Probabilistic Methods: The paper discusses the evaluation of different learning algorithms using real industrial and commercial applications. This involves the use of probabilistic methods to handle uncertainty and variability in the data.  Neural Networks: The paper mentions Daimler-Benz introducing applications such as fault diagnosis, letter and digit recognition, credit-scoring, and prediction of the number of registered trucks. These applications likely involve the use of neural networks for pattern recognition and prediction. The paper also discusses shortcomings of the applied ML-algorithms, which may include issues with neural network architectures and training methods.
Reinforcement Learning.   Explanation: The paper explicitly mentions the application of reinforcement learning to the problem of elevator dispatching. The challenges posed by the elevator domain, such as continuous state spaces, nonstationarity, and incomplete observation of the state, are all addressed using RL techniques. The results of the simulation demonstrate the effectiveness of RL in solving this practical optimization problem. None of the other sub-categories of AI are mentioned or implied in the text.
Reinforcement Learning.   Explanation: The paper presents a technique for efficient exploration in partially observable domains using reinforcement learning. The key idea is to keep statistics in the space of possible short-term memories, which is a common approach in reinforcement learning. The paper also presents experimental results in a partially observable maze and a difficult driving task with visual routines, which are typical domains for reinforcement learning.
Reinforcement Learning, Theory.  Reinforcement learning is present in the text as the paper discusses policy iteration in dynamic programming, which is a common technique in reinforcement learning. The paper also discusses the advantages of actions at states, which is a concept often used in reinforcement learning.  Theory is present in the text as the paper discusses the differential consistency conditions that advantages must satisfy, which is a theoretical concept. The paper also proposes a method for policy improvement solely in terms of advantages, which is a theoretical contribution.
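The advantage of an action at a state can be illustrated directly from action values: defining A(s, a) = Q(s, a) - max_b Q(s, b), every advantage is non-positive, the greedy action has advantage zero, and policy improvement can be phrased purely as picking the action with maximal advantage. The Q-values below are invented for illustration; the paper's own development, including its differential consistency conditions, is more general:

```python
# Advantages measure how much better an action is than the greedy choice:
# A(s, a) = Q(s, a) - max_b Q(s, b), so A <= 0 and the greedy action has A = 0.
# These Q-values are invented for illustration.
Q = {
    's0': {'left': 1.0, 'right': 3.0},
    's1': {'left': 2.5, 'right': 2.0},
}

def advantages(state):
    best = max(Q[state].values())
    return {a: q - best for a, q in Q[state].items()}

def greedy_action(state):
    # Policy improvement expressed solely in terms of advantages.
    adv = advantages(state)
    return max(adv, key=adv.get)

print(advantages('s0'))
print(greedy_action('s1'))
```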
Neural Networks, Rule Learning.   Neural Networks: The paper discusses using a standard artificial neural network representation for protein secondary structure prediction and selecting features to augment it.   Rule Learning: The DT-Select approach involves building a decision tree to classify training examples and selecting features based on the tree. This is a form of rule learning.
Genetic Algorithms.   Explanation: The paper is focused on the development of an extension package for experimentation with Coarse-Grained Distributed Genetic Algorithms (DGA). The package was implemented as an extension to the Basic Sugal system, which is primarily intended to be used in the research of Sequential or Serial Genetic Algorithms (SGA). Therefore, the paper is primarily related to Genetic Algorithms.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of a two-layer network of thresholded summation units to support the representation of 3D objects from multiple viewpoints. The network uses unsupervised Hebbian relaxation to learn to recognize the objects and develop compact representations of the input views.  Probabilistic Methods: The paper does not explicitly mention the use of probabilistic methods, but the network's ability to generalize to novel views of the same objects suggests that it is using probabilistic reasoning to some extent. Additionally, the simulated psychophysical experiments suggest that the network's behavior is qualitatively similar to that of human subjects, which is often a goal of probabilistic modeling in AI.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper discusses the use of internal models in multi-layer networks for supervised learning.  Reinforcement Learning: The paper discusses the importance of internal models in adaptive systems and how they can be used to solve problems associated with the "teacher" in supervised learning, problems that reinforcement learning addresses by replacing explicit targets with evaluative feedback.
Rule Learning.   Explanation: The paper presents an algorithm for incremental induction of decision trees, which falls under the sub-category of Rule Learning in AI. The paper introduces a new tree revision operator called 'slewing' to handle numeric variables and also provides a non-incremental method for finding a decision tree based on a direct metric of a candidate tree. These techniques are all related to the process of learning rules from data, which is the focus of Rule Learning in AI.
Rule Learning, Learning from examples.   Rule Learning is present in the paper as the authors propose a method for learning physical descriptions from functional definitions and examples. They use a set of rules to generate descriptions based on the input data.   Learning from examples is also present in the paper as the authors use a dataset of examples to train their model and improve its accuracy. They discuss the effect of different conceptual representations on the learning process and compare their results to previous work in the field.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of autoencoders, which are a type of neural network, to model the manifolds of images of handwritten digits. The authors also mention using a convolutional neural network (CNN) for classification.  Probabilistic Methods: The paper uses a probabilistic model called a Gaussian mixture model (GMM) to represent the distribution of the latent variables in the autoencoder. The authors also use a probabilistic model called a variational autoencoder (VAE) to generate new images of digits.
Reinforcement Learning, Probabilistic Methods, Theory.   Reinforcement Learning: The paper discusses the Weighted Majority Algorithm, which makes decisions based on feedback received from the environment, a setting it shares with reinforcement learning.   Probabilistic Methods: The paper discusses the use of probabilities in the Weighted Majority Algorithm. The algorithm assigns weights to different options based on their past performance, and these weights are used to make probabilistic decisions.   Theory: The paper presents a theoretical analysis of the Weighted Majority Algorithm, including its convergence properties and its performance under different conditions. The authors also compare the algorithm to other algorithms in the literature and discuss its advantages and disadvantages.
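The deterministic Weighted Majority Algorithm itself fits in a few lines: each expert carries a weight, the algorithm predicts with the weighted vote, and every mistaken expert's weight is multiplied by a penalty factor beta. The expert sequence and beta = 0.5 below are illustrative:

```python
def weighted_majority(expert_predictions, outcomes, beta=0.5):
    # expert_predictions[t][i]: bit predicted by expert i at step t.
    # Predict with the weighted majority vote, then multiply the weight
    # of every mistaken expert by beta (0 < beta < 1).
    n = len(expert_predictions[0])
    weights = [1.0] * n
    mistakes = 0
    for preds, outcome in zip(expert_predictions, outcomes):
        vote_1 = sum(w for w, p in zip(weights, preds) if p == 1)
        vote_0 = sum(w for w, p in zip(weights, preds) if p == 0)
        guess = 1 if vote_1 >= vote_0 else 0
        mistakes += (guess != outcome)
        weights = [w * beta if p != outcome else w
                   for w, p in zip(weights, preds)]
    return mistakes, weights

# Expert 0 is always right on this invented sequence; expert 1 always wrong.
preds = [(1, 0), (0, 1), (1, 0), (1, 0)]
outcomes = [1, 0, 1, 1]
m, w = weighted_majority(preds, outcomes)
print(m, w)
```

The randomized variant analyzed in such papers predicts 1 with probability vote_1 / (vote_1 + vote_0) instead of taking the deterministic vote, which roughly halves the worst-case mistake bound.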
Rule Learning, Case Based  Explanation:  The paper discusses a simple selection strategy for retaining control rules derived from a training problem explanation, which falls under the category of rule learning. The approach is based on selecting the most utile control rules, which is similar to the case-based approach in AI where past experiences are used to solve new problems. Therefore, the paper also belongs to the sub-category of case-based AI.
Reinforcement Learning.   Explanation: The paper describes a new algorithm for learning feasible trajectories to goal regions in high dimensional continuous state-spaces using techniques from game-theory and computational geometry. The algorithm is designed to find feasible paths or trajectories to goal regions in high dimensional spaces and has been tested on various simulated problems. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Rule Learning, or Theory.
Probabilistic Methods.   Explanation: The paper discusses three approaches for computing the predictive distribution of a discrete variable, all of which are based on probabilistic models. The joint probability distribution for the variables is assumed to belong to a set of distributions determined by a set of parametric models. The three approaches considered are based on the maximum a posteriori (MAP) posterior probability, averaging over all the individual models in the model family, and using Rissanen's new definition of stochastic complexity. The experiments performed with the family of Naive Bayes models suggest that the stochastic complexity approach produces the most accurate predictions in the log-score sense. Therefore, the paper belongs to the sub-category of AI known as Probabilistic Methods.
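The contrast between the MAP plug-in and averaging over models can be seen in the simplest possible case, a single Bernoulli parameter under a uniform prior (a textbook special case, not the paper's Naive Bayes setting): the plug-in predicts with k/n, while averaging the likelihood over the posterior yields Laplace's rule of succession (k + 1)/(n + 2), which never assigns an extreme probability.

```python
from fractions import Fraction

def map_predict(k, n):
    # Plug-in prediction with the MAP parameter k / n: the mode of the
    # Beta(k + 1, n - k + 1) posterior under a uniform Beta(1, 1) prior.
    return Fraction(k, n)

def averaged_predict(k, n):
    # Averaging the Bernoulli likelihood over that posterior gives
    # Laplace's rule of succession (k + 1) / (n + 2).
    return Fraction(k + 1, n + 2)

print(map_predict(3, 3), averaged_predict(3, 3))
```

After three successes in three trials the plug-in declares the next outcome certain, while the averaged prediction gives 4/5, illustrating why averaging (and, relatedly, stochastic-complexity-based prediction) tends to be better calibrated on small samples.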
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper introduces a Bayesian probability propagation algorithm for case-based reasoning, which is implemented as a feedforward neural network. The approach allows for theoretically sound Bayesian reasoning and replaces heuristic matching with a probability metric.  Neural Networks: The paper introduces a neural network architecture for efficient case-based reasoning, which is used to implement the Bayesian probability propagation algorithm. The parallel architecture of the neural network naturally implements the efficient indexing problem of CBR.
Case Based, Rule Learning  Explanation:  - Case Based: The paper proposes the use of case-based reasoning to explain creative design processes. This falls under the category of Case Based AI, which involves using past experiences to solve new problems. - Rule Learning: The paper discusses how creativity often involves using old solutions in novel ways, which can be seen as a form of rule learning. Rule Learning AI involves learning rules or patterns from data, which can then be applied to new situations.
Neural Networks, Theory.   Neural Networks: The paper discusses the use of neural networks as a tool for modeling language as a dynamical system. The authors describe how neural networks can be used to simulate the behavior of language systems and how they can be trained to learn patterns in language data.   Theory: The paper presents a theoretical framework for understanding language as a dynamical system. The authors draw on concepts from dynamical systems theory, such as attractors and bifurcations, to explain how language systems evolve over time. They also discuss the implications of this framework for understanding language acquisition, language change, and language processing.
Theory. This paper belongs to the Theory sub-category of AI. The paper presents a new general-purpose algorithm for learning classes of [0, 1]-valued functions and proves a general upper bound on the expected absolute error of this algorithm in terms of a scale-sensitive generalization of the Vapnik dimension. The paper also applies this result to obtain new upper bounds on packing numbers and sample complexity of agnostic learning. The paper does not discuss Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning.
Neural Networks. This paper belongs to the sub-category of Neural Networks as it discusses the development of neural network algorithms and models and their combination into a modular structure for incorporation into intelligent systems. The paper specifically presents an architecture for a type of neural expert module called an Authority, which consists of a collection of Minos modules that function like a panel of experts. The expert with the highest confidence is selected, and its answer and confidence quotient are transmitted to other levels in a system hierarchy.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses partially observable Markov decision processes (POMDPs), which are probabilistic models used to model decision problems in which an agent tries to maximize its reward in the face of limited and/or noisy sensor feedback.  Reinforcement Learning: The paper discusses various solution methods for finding optimal behavior in POMDPs, which is a key aspect of reinforcement learning. The paper also suggests methods for scaling to larger and more complicated domains, which is a common challenge in reinforcement learning.
Probabilistic Methods.   Explanation: The paper proposes a new method for constructing Markov chains with a given stationary distribution, which is a probabilistic method. The paper also compares the proposed algorithm with other MCMC techniques, which are also probabilistic methods.
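The construction problem the paper addresses — building a Markov chain with a prescribed stationary distribution — can be illustrated with the textbook Metropolis-Hastings recipe. A minimal sketch; the standard-normal target, step size, and seed are illustrative stand-ins, not the paper's proposal:

```python
import math
import random

def metropolis_hastings(log_pi, x0, steps, step_size=1.0, seed=0):
    """Metropolis-Hastings with a symmetric Gaussian proposal: by
    construction the chain has pi as its stationary distribution,
    whatever (unnormalized) log-density log_pi is supplied."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(steps):
        y = x + rng.gauss(0.0, step_size)               # propose a move
        if math.log(rng.random()) < log_pi(y) - log_pi(x):
            x = y                                       # accept
        chain.append(x)                                 # on reject, keep x
    return chain

# Illustrative target: a standard normal, log pi(x) = -x^2/2 + const.
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
```

The acceptance rule compares only unnormalized densities, which is what makes such chains attractive when the normalizing constant is unknown.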
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs), which are probabilistic models used to represent uncertain conditions.   Reinforcement Learning: The paper introduces Smooth Partially Observable Value Approximation (SPOVA), a new approximation method that can be combined with reinforcement learning methods. The effectiveness of this combination is also discussed in the paper.
Neural Networks.   Explanation: The paper's title explicitly mentions "Neural Network" as the subject of the search. The abstract also describes the paper as proposing a "parallel search" for neural networks. The paper does not mention any other sub-categories of AI, so it cannot be categorized under any other option.
Neural Networks, Theory  Explanation:  This paper belongs to the sub-category of Neural Networks because it discusses the use of connectionist modeling, which is a type of neural network approach, to study the fast mapping phenomenon. The authors use a neural network model to simulate the cognitive processes involved in fast mapping, which involves quickly learning the meaning of a new word based on limited exposure to it.   This paper also belongs to the sub-category of Theory because it presents a theoretical framework for understanding the fast mapping phenomenon. The authors propose a computational model of fast mapping that is based on the idea that the brain uses statistical learning to infer the meaning of new words based on their context. They also discuss how their model relates to existing theories of language acquisition and cognitive development.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper discusses the Katsuno and Mendelzon (KM) theory of belief update, which is a probabilistic method for revising beliefs about a changing world. The paper also proposes an alternative semantic view of update that incorporates observations into a belief set by explaining them in terms of a set of plausible events and predicting further consequences of those explanations.  Theory: The paper presents an alternative semantic view of belief update and argues that certain assumptions underlying the KM postulates are not always reasonable and restrict our ability to integrate update with other forms of revision when reasoning about action. The paper also discusses the semantics of update and how it relies on information that is not readily available.
Neural Networks. The paper discusses the architecture and learning mechanisms of brain-structured networks for perceptual recognition, which are based on the anatomy, physiology, behavior, and development of the visual system. The paper also presents simulations and results of brain-structured networks that learn to recognize objects through feedback-guided generation and reweighting.
Rule Learning, Theory.   Explanation: The paper describes decision tree induction, which is a type of rule learning. The focus of the paper is on the theoretical and computational aspects of decision tree induction, including efficient tree restructuring. There is no mention of genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or case-based reasoning.
Probabilistic Methods.   Explanation: The paper discusses a logistic regression model with a Gaussian prior distribution over the parameters, and uses variational techniques to obtain a closed form posterior distribution over the parameters given the data. This is a probabilistic approach to modeling and inference. The paper also extends the results to binary belief networks and derives closed form posteriors in the presence of missing values, which are also probabilistic methods.
Probabilistic Methods.   Explanation: The paper discusses mean field methods, which are a type of probabilistic method used for approximating posterior probability distributions in graphical models. The paper also introduces mixture models as a way to improve the accuracy of the mean field approximation.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper presents a neuroevolution system (Enforced Sub-Populations, or ESP) that uses genetic algorithms to solve the difficult 2-D pole balancing problem.   Neural Networks: The ESP system uses recurrent evolutionary networks to solve the problem. The paper also mentions that the classic pole balancing problem is no longer difficult enough to serve as a viable yardstick for measuring the learning efficiency of these systems, implying that neural networks have become more advanced and capable of solving such problems.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper explores learning mechanisms in connectionist networks, which are massively parallel networks of simple computing elements. It discusses how these networks constructively build up network structures that encode information from environmental stimuli at successively higher resolutions as needed for the tasks that the network has to perform.   Probabilistic Methods: The paper discusses biases for efficient learning of spatial, temporal, or spatio-temporal patterns in connectionist networks. It examines how these biases guide the system to focus its efforts at the minimal adequate resolution, ensuring the parsimony of learned representations. The paper also discusses extensions of the basic algorithm for efficient learning using multi-resolution representations of spatial, temporal, or spatio-temporal patterns, which involve probabilistic methods.
Reinforcement Learning.   Explanation: The paper discusses a method for accelerating Q-learning, which is a type of reinforcement learning algorithm. The paper specifically focuses on the Q(λ)-learning variant of Q-learning. The text mentions TD(λ)-methods, which are commonly used in reinforcement learning, and the update complexity of Q(λ)-learning, which is a key aspect of reinforcement learning algorithms. Therefore, this paper belongs to the Reinforcement Learning sub-category of AI.
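The update complexity in question can be made concrete with a tabular Watkins-style Q(λ) backup: every step touches all eligibility traces, which is exactly what acceleration schemes try to avoid. A minimal sketch, assuming a toy two-action space that is not from the paper:

```python
from collections import defaultdict

ACTIONS = (0, 1)   # toy two-action space (an assumption, not the paper's)

def q_lambda_update(Q, E, s, a, r, s2, alpha=0.5, gamma=0.9, lam=0.8):
    """One Watkins-style Q(lambda) backup: the TD error delta is applied
    to every state-action pair in proportion to its eligibility trace E,
    so a naive implementation touches every trace on every step."""
    delta = r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)]
    E[(s, a)] += 1.0                         # accumulating trace
    for sa in list(E):
        Q[sa] += alpha * delta * E[sa]       # credit all eligible pairs
        E[sa] *= gamma * lam                 # decay every trace

Q, E = defaultdict(float), defaultdict(float)
q_lambda_update(Q, E, 's0', 0, 1.0, 's1')    # reward 1.0 on s0 -> s1
```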
Neural Networks.   Explanation: The paper discusses learning structures and processes for massively parallel networks of simple computing elements, which are a type of neural network. The paper specifically focuses on generative learning algorithms for these networks, which allow for adaptive determination of the network architecture and connectivity based on experience. The paper also discusses alternative designs and control structures for regulating the internal representations learned by these networks.
Neural Networks.   Explanation: The paper discusses the use of artificial neural networks in the domain of autonomous vehicle navigation and presents a modular neural architecture for autonomous road following. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper introduces a learning system that models data using locally linear experts. Each expert is trained independently and adjusts its receptive field and bias using second order methods.   Probabilistic Methods: The experts cooperate by blending their individual predictions when a query is required. Each expert is trained by minimizing a penalized local cross validation error. The paper also derives asymptotic results for the method.
Reinforcement Learning, Theory.   Reinforcement learning is present in the paper as the model of a non-Bayesian agent who faces a repeated game with incomplete information against Nature is an appropriate tool for modeling general agent-environment interactions. The paper discusses policies for the agent, which is a function that assigns an action to every history of observations and actions. The paper also discusses feedback structures in partially observable processes.   Theory is present in the paper as it discusses the existence of an efficient stochastic policy that ensures that the competitive ratio is obtained at almost all stages with an arbitrarily high probability, where efficiency is measured in terms of rate of convergence. The paper also discusses the maxmin criterion and proves that a deterministic efficient optimal strategy does exist in the imperfect monitoring case under this criterion. Finally, the paper shows that their approach to long-run optimality can be viewed as qualitative, which distinguishes it from previous work in this area.
Probabilistic Methods.   Explanation: The paper describes a probabilistic method for learning axis-aligned rectangles with respect to product distributions from multiple-instance examples in the PAC model. The accuracy of the hypothesis is measured by the probability that it would incorrectly predict whether one of n more points drawn from D was in the rectangle to be learned. The algorithm achieves accuracy ε with probability 1 − δ in polynomial time.
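For intuition, the classic single-instance PAC learner for axis-aligned rectangles just takes the tightest rectangle enclosing the positive examples; the paper's multiple-instance setting builds on this idea. An illustrative 2-D sketch, not the paper's algorithm:

```python
def tightest_rectangle(positives):
    """Tightest axis-aligned rectangle enclosing the positive points --
    the classic single-instance PAC hypothesis for rectangle learning."""
    xs = [x for x, _ in positives]
    ys = [y for _, y in positives]
    lo, hi = (min(xs), min(ys)), (max(xs), max(ys))
    def predict(p):
        return lo[0] <= p[0] <= hi[0] and lo[1] <= p[1] <= hi[1]
    return predict

# Toy positive sample; the hypothesis is the box [0, 2] x [0, 3].
predict = tightest_rectangle([(0, 0), (2, 3), (1, 1)])
```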
Rule Learning, Theory.   Explanation:  The paper presents a new machine learning method that induces a definition of the target concept in terms of a hierarchy of intermediate concepts and their definitions. This approach is inspired by the Boolean function decomposition approach to the design of digital circuits, which is a rule-based method. The paper also proposes a suboptimal heuristic algorithm to cope with high time complexity, which is a theoretical aspect of the method. Therefore, the paper belongs to the sub-categories of Rule Learning and Theory in AI.
Probabilistic Methods.   Explanation: The paper discusses the Bayesian approach, which is a probabilistic method, and its application in estimating probabilities and probability distributions in the context of inductive learning. The m-probability and m-distribution estimates are also specifically mentioned as probabilistic methods. The paper does not discuss case-based reasoning, genetic algorithms, neural networks, reinforcement learning, or rule learning.
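The m-probability estimate mentioned above has a one-line form: blend the observed relative frequency with a prior probability, weighted as if m extra prior examples had been seen. A minimal sketch (parameter names are mine, not the paper's):

```python
def m_estimate(successes, total, prior_p, m):
    """m-probability estimate: the observed frequency successes/total is
    blended with the prior probability prior_p as if m additional prior
    examples had been observed.  m = 2 with prior_p = 0.5 recovers
    Laplace's rule of succession."""
    return (successes + m * prior_p) / (total + m)
```

Unlike the raw relative frequency, the estimate is well defined even with zero observations, where it simply returns the prior.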
The paper does not belong to any of the sub-categories of AI listed. The title and abstract do not provide any indication of the specific AI sub-category being discussed.
Probabilistic Methods.   Explanation: The paper discusses learning from incomplete data from two statistical perspectives - the likelihood-based and the Bayesian. The algorithms presented in the paper are based on mixture modeling and make use of the Expectation-Maximization (EM) principle for estimation and coping with missing data. These are all examples of probabilistic methods in AI.
Neural Networks.   Explanation: The paper discusses the implementation of deterministic finite-state automata (DFA) in sparse second-order recurrent neural networks (SORNN) with fault tolerance. The focus is on the neural network implementation, and there is no mention of other sub-categories of AI such as genetic algorithms, probabilistic methods, reinforcement learning, rule learning, or case-based reasoning.
Probabilistic Methods.   Explanation: The paper discusses a model-based clustering approach for detecting features in spatial point processes with clutter. The approach involves fitting a probabilistic model to the data and using it to cluster the points into groups representing different features. The authors also use Bayesian model selection to choose the number of clusters. Therefore, the paper is primarily focused on probabilistic methods for clustering spatial data.
Reinforcement Learning, Theory.   Reinforcement learning is the most related sub-category as the paper deals with the problem of maximizing reward in a sequential decision-making process, which is the core of reinforcement learning. The paper proposes an algorithm that learns to make decisions based on the feedback it receives from the environment.   Theory is also a relevant sub-category as the paper provides a theoretical analysis of the proposed algorithm's performance. The authors prove upper and lower bounds on the expected per-round payoff of the algorithm and compare it to the best possible performance. They also consider a setting with multiple experts and provide a strategy that guarantees expected payoff close to that of the best expert.
Probabilistic Methods.   Explanation: The paper discusses an alternative way of representing Bayesian belief networks using sensitivities and probability distributions, which are probabilistic methods. The paper also proposes a QR matrix representation for the sensitivities and/or conditional probabilities, which is more efficient for computer-based implementations of probabilistic inference. The paper also describes an exact algorithm for probabilistic inference that uses the QR-representation for sensitivities and updates probability distributions of nodes in a network according to messages from the neighbors.
Neural Networks.   Explanation: The paper discusses the design of a supercomputer specifically for training large neural networks, and emphasizes the need for custom hardware that incorporates neural network-specific features. The entire paper is focused on the use of neural networks for AI, and does not discuss any other sub-category.
Probabilistic Methods.   Explanation: The paper describes an active learning method that uses a committee of learners to reduce the number of training examples required for learning. The approach is similar to the Query by Committee framework, where disagreement among the committee members on the predicted label for the input part of the example is used to signal the need for knowing the actual value of the label. The use of a committee of learners implies a probabilistic approach, where the committee members have different opinions on the predicted label, and the disagreement is resolved probabilistically.
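The disagreement signal at the heart of Query by Committee fits in a few lines; the threshold "learners" below are toy stand-ins for the trained committee members, not the paper's models:

```python
def should_query(committee, x):
    """Query-by-committee rule: request the true label only where the
    committee members' predictions disagree."""
    return len({predict(x) for predict in committee}) > 1

# Toy committee of two threshold classifiers (stand-ins for real learners).
committee = [lambda x: x > 0, lambda x: x > 2]
queries = [x for x in range(-2, 5) if should_query(committee, x)]   # -> [1, 2]
```

Only the region between the two thresholds triggers a query, which is how the scheme concentrates labeling effort where it reduces uncertainty.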
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of neural networks in probabilistic modelling and ensemble learning. It explains how neural networks can be used to improve the accuracy of predictions by combining multiple models.  Probabilistic Methods: The paper focuses on probabilistic modelling and how it can be improved using ensemble learning. It discusses the use of probability distributions and Bayesian methods in the context of neural networks.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper discusses the use of smoothing spline ANOVA for exponential families, which involves modeling the response variable as a probability distribution. The authors also use Bayesian methods to estimate the smoothing parameters.  Theory: The paper presents a theoretical framework for smoothing spline ANOVA and discusses the properties of the method, such as its ability to capture non-linear relationships and interactions between variables. The authors also provide proofs for some of the theoretical results.
This paper belongs to the sub-category of AI called Probabilistic Methods.   Explanation:  The paper proposes a linear programming-based machine learning approach for cancer diagnosis and prognosis. The approach uses probabilistic methods to model the relationships between clinical and genetic factors and the likelihood that a patient has cancer or will develop it in the future. The authors use logistic regression and Cox proportional hazards models, both of which are probabilistic methods, to build their predictive models.
Rule Learning.   Explanation: The paper discusses two techniques, covering and divide-and-conquer, for top-down induction of logic programs. These techniques are both examples of rule learning, which is a sub-category of AI that involves learning rules or logical expressions from data. The paper compares the two techniques in a logic programming framework and presents experimental results to demonstrate their effectiveness.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the concept of algorithmic probability, which is a probabilistic method for measuring the complexity of a string of data. The paper explains how algorithmic probability can be used to make predictions and generate new data.  Theory: The paper is focused on the development of a new theoretical framework for measuring complexity and making predictions. The authors discuss the philosophical implications of their work and how it relates to other theories of intelligence and complexity. They also provide mathematical proofs and formal definitions to support their arguments.
Reinforcement Learning, Rule Learning  The paper belongs to the sub-category of Reinforcement Learning as it discusses the use of machine learning in the game of checkers, which involves learning through trial and error and receiving rewards or punishments based on the moves made. The paper also belongs to the sub-category of Rule Learning as it discusses the development of a set of rules for playing checkers based on the learned strategies. The authors mention the use of "heuristic rules" and "evaluation functions" to guide the learning process and improve the performance of the program.
Neural Networks, Theory.   Neural Networks: The paper proposes a straightforward translation of the prediction method to an artificial neural network model.   Theory: The paper introduces a new inductive learning method called Recurrence Surface Approximation and discusses its computational results in the field of breast cancer prognosis. The paper also presents a feature selection method within the context of the linear programming generalizer.
Probabilistic Methods.   Explanation: The paper discusses the Minimum Message Length (MML) technique, which is a probabilistic method for Bayesian point estimation. The MML theory is also described as the theory with the highest posterior probability. The paper also outlines how MML is used for statistical parameter estimation and how the MML mixture modelling program, Snob, combines parameter estimation with selection of the number of components. The paper discusses various probability distributions, including Gaussian, discrete multi-state, Poisson, and von Mises circular distributions.
Probabilistic Methods.   Explanation: The paper proposes a random approach to motion planning, which involves generating random configurations of the robot and testing whether they are collision-free. The approach is based on probabilistic methods, specifically Monte Carlo methods, which involve generating random samples to estimate the probability of a certain event occurring. The paper discusses the use of random sampling to generate configurations, the use of collision detection to test for collisions, and the use of probability distributions to guide the sampling process. Therefore, this paper belongs to the sub-category of Probabilistic Methods in AI.
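The rejection-sampling core of such a randomized planner is compact: draw configurations at random and keep those that pass the collision check. A minimal sketch; the unit-square configuration space and disc obstacle are illustrative assumptions, not the paper's setup:

```python
import random

def sample_free_configs(n, in_collision, seed=0):
    """Core loop of randomized motion planning: sample configurations
    uniformly from the (unit-square) configuration space and keep the
    ones the collision predicate reports as free."""
    rng = random.Random(seed)
    free = []
    while len(free) < n:
        q = (rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0))
        if not in_collision(q):
            free.append(q)
    return free

# Toy obstacle: a disc of radius 0.25 centered in the workspace.
def obstacle(q):
    return (q[0] - 0.5) ** 2 + (q[1] - 0.5) ** 2 < 0.25 ** 2

configs = sample_free_configs(50, obstacle)
```

A full planner would go on to connect these free configurations into a graph and search it for a path, but the sampling loop above is where the probabilistic machinery enters.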
Neural Networks.   Explanation: The paper presents VISIT, a connectionist model of covert visual attention that is biologically plausible and efficient. The model is based on neural networks and uses effective parallel strategies to minimize the number of iterations required. The paper discusses various extensions to the model, including methods for learning the component modules.
Neural Networks, Theory.   Neural Networks: The paper discusses the dynamics of neural networks with excitatory and inhibitory neurons, which are a type of artificial neural network. The Lyapunov function constructed in the paper is used to analyze the stability of fixed points and limit cycles in these networks.  Theory: The paper presents a theoretical analysis of the dynamics of excitatory-inhibitory networks, using a Lyapunov function to derive conditions for global asymptotic stability. The paper also discusses the relationship between the Lyapunov function and optimization theory and classical mechanics.
This paper does not belong to any of the sub-categories of AI listed. It is focused on improving the efficiency of reading the SDM memory and does not involve any AI techniques or methods.
This paper does not belong to any of the sub-categories of AI listed. It appears to be a technical paper related to operations management and does not discuss any AI techniques or methods.
Theory.   Explanation: The paper discusses various theoretical results and techniques for stabilizing nonlinear systems using feedback control. While some of the methods mentioned may involve elements of other sub-categories of AI (such as neural networks or reinforcement learning), the primary focus of the paper is on theoretical analysis and design of control systems, making it most closely related to the Theory sub-category.
Probabilistic Methods.   Explanation: The paper discusses the use of hierarchical selection models, which are a type of statistical model that uses probabilistic methods to estimate the effects of different variables on a given outcome. The authors apply these models to meta-analysis, which involves combining the results of multiple studies to draw more general conclusions. The paper also discusses the use of Bayesian methods for model selection and inference. Overall, the paper is focused on the use of probabilistic methods for modeling and analyzing complex data sets.
Probabilistic Methods.   Explanation: The paper discusses Bayesian inference and the estimation of ratios of normalizing constants for densities, which are both probabilistic concepts. The methods proposed in the paper, such as importance sampling and bridge sampling, are also probabilistic methods commonly used in Bayesian inference.
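The simplest importance-sampling identity for a ratio of normalizing constants, Z1/Z2 = E_{p2}[q1(X)/q2(X)] for unnormalized densities q1, q2, can be checked on a toy Gaussian pair (the specific densities below are illustrative, not from the paper; bridge sampling refines this estimator):

```python
import math
import random

def ratio_of_normalizers(log_q1, log_q2, sample_p2, n=50000, seed=1):
    """Estimate Z1/Z2 via the identity Z1/Z2 = E_{p2}[q1(X)/q2(X)]:
    average the unnormalized density ratio over draws from p2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sample_p2(rng)
        total += math.exp(log_q1(x) - log_q2(x))
    return total / n

# q1 ~ unnormalized N(0, 1), q2 ~ unnormalized N(0, 4); true ratio is 1/2.
est = ratio_of_normalizers(
    lambda x: -0.5 * x * x,
    lambda x: -x * x / 8.0,
    lambda rng: rng.gauss(0.0, 2.0),
)
```

Sampling from the wider density keeps the ratio bounded and the estimator's variance finite, which is the kind of consideration that motivates the bridge-sampling refinements the paper compares.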
Case Based, Theory.   The paper discusses a novel algorithm for efficient associative matching of relational structures in large semantic networks, which is specifically relevant to case-based reasoning. This falls under the sub-category of Case Based AI. Additionally, the paper discusses the PARKA system, which is a knowledge representation system, indicating a focus on the theoretical aspects of AI.
Theory.   Explanation: The paper presents a series of sequential learning procedures for different types of pac-learning, and analyzes their expected training sample size. It does not involve any implementation or application of specific AI techniques such as neural networks or reinforcement learning. The focus is on theoretical analysis of the effectiveness of the proposed methods.
Neural Networks.   Explanation: The paper is specifically about the dimension of recurrent neural networks, and discusses various mathematical and computational techniques for analyzing and understanding the behavior of these networks. While other sub-categories of AI may also be relevant to the study of neural networks, this paper focuses exclusively on this area of research.
Genetic Algorithms, Probabilistic Methods, Theory.   Genetic Algorithms: The paper proposes an adaptive global optimization algorithm that uses a genetic algorithm as its global search component. The algorithm evolves a population of candidate solutions using genetic operators such as crossover and mutation.   Probabilistic Methods: The paper also uses probabilistic methods to guide the search towards promising regions of the search space. Specifically, it uses a probability distribution to bias the selection of parents for crossover and mutation.   Theory: The paper presents a theoretical analysis of the proposed algorithm, including convergence properties and a complexity analysis. The authors also discuss the relationship between their algorithm and other optimization methods in the literature.
This paper belongs to the sub-category of AI known as Neural Networks.   Explanation:  The title of the paper explicitly mentions "neural networks," indicating that the focus of the paper is on this sub-category of AI. The abstract also mentions "learning and evolution" in the context of neural networks, further emphasizing the relevance of this sub-category. While other sub-categories of AI may be mentioned or utilized in the paper, the primary focus is on neural networks.
Case Based, Theory  Explanation:  - Case Based: The paper presents a novel approach to determine structural similarity as guidance for adaptation in case-based reasoning (Cbr). It discusses the retrieval, matching, and adaptation of cases, which are key components of Cbr. - Theory: The paper proposes a theoretical approach to advance structural similarity assessment in Cbr. It also mentions that the approach is not restricted to a specific domain, indicating a more general theoretical framework.
Theory.   Explanation: The paper presents a theoretical approach to solving the blame-assignment task in the context of experience-based design and redesign of physical devices. It does not use any specific AI sub-category such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Rule Learning, Case Based.   Rule Learning is present in the text as the authors discuss the formal integrated model of knowledge for design, which includes knowledge for planning steps in design and problem-solving knowledge that supports design. This model is based on the task-structure, which guides both acquisition and application of knowledge.   Case Based is also present in the text as the authors discuss the different types of knowledge that enter the knowledge base of a design support system, including problem-solving knowledge. The authors also give an account of possibilities for problem solving depending on the knowledge that is at the disposal of the system, which is a key aspect of case-based reasoning.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses Bayesian approaches to unsupervised classification, which is a probabilistic method.  Neural Networks: The paper also discusses ART2, a neural net classification algorithm, which falls under the category of neural networks.
Case Based.   Explanation: The paper discusses the development of case-based systems and the need for explaining the processes of case-based reasoning. The concept of a meta-case is introduced as a means of illustrating, explaining, and justifying case-based reasoning. The paper also describes a task-method-knowledge (TMK) model of problem-solving and how meta-cases can be represented in the TMK language. These are all related to the sub-category of Case Based AI.
Probabilistic Methods.   Explanation: The paper discusses the use of statistical decision theory to estimate amino acid frequencies in protein families, which is a probabilistic method. The goal is to minimize the risk function, which is a common approach in probabilistic methods. The paper also presents formulas for adding pseudocounts to the observed data, which is a common technique in probabilistic methods for dealing with sparse data.
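Pseudocount smoothing of observed amino-acid counts amounts to adding the pseudocounts and renormalizing. A minimal sketch with a made-up three-letter alphabet (real protein work uses all 20 amino acids and carefully derived pseudocounts, not the uniform ones here):

```python
def posterior_frequencies(counts, pseudocounts):
    """Add pseudocounts to the observed counts and renormalize: the
    smoothed frequency estimate stays strictly positive even for
    residues never observed in the column."""
    total = sum(counts[a] + pseudocounts[a] for a in counts)
    return {a: (counts[a] + pseudocounts[a]) / total for a in counts}

# Made-up 3-letter column; uniform pseudocount of 1 per residue.
freqs = posterior_frequencies({'A': 3, 'G': 1, 'W': 0},
                              {'A': 1.0, 'G': 1.0, 'W': 1.0})
```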
Probabilistic Methods, Neural Networks, Theory.   Probabilistic Methods: The paper discusses the bias of learning devices, which can be seen as a probabilistic approach to learning. The authors also mention the use of probability distributions in modelling learning biases.  Neural Networks: The paper discusses generalist models of learning, which can be seen as a type of neural network. The authors also mention the isotropy of bias, which is related to the structure of neural networks.  Theory: The paper proposes a refinement of the notion of innateness, which can be seen as a theoretical contribution to the field of AI. The authors also discuss the characteristics of bias and how they relate to different types of learning models.
Genetic Algorithms.   Explanation: The paper discusses the use of a genetic algorithm for optimizing a mathematical model in economics and econometrics. The title also mentions "Genetic Algorithm" specifically. There is no mention of any other sub-category of AI in the text.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents an artificial neural network based learning approach for handling difficult scenes which will confuse the ALVINN system.   Probabilistic Methods: The paper proposes a saliency map, which is based upon a computed expectation of the contents of the inputs in the next time step, indicating which regions of the input retina are important for performing the task. This approach is based on probabilistic methods.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses the problem of production scheduling in the face of unpredictable demand and stochastic factory output, which requires capturing stochasticity in both production and demands. The Markov Decision Process (MDP) formulation used in the paper is a probabilistic method.  Reinforcement Learning: The paper describes two reinforcement learning methods for generating an approximate value function on the production scheduling domain. The solution to the MDP is a value function which can be used to generate optimal scheduling decisions online.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper specifically deals with the learning of probabilistic concepts, which are Boolean functions that exhibit uncertain or probabilistic behavior. The authors develop efficient algorithms for learning natural classes of p-concepts and study the underlying theory of learning p-concepts.  Theory: The paper also focuses on developing a formal model of machine learning for probabilistic concepts that meets the demands of efficiency and generality. The authors study and develop in detail an underlying theory of learning p-concepts.
This paper belongs to the sub-category of AI called Reinforcement Learning.   Explanation: The paper discusses a method for learning by using dynamic feature combination and selection, which is a key aspect of reinforcement learning. Reinforcement learning involves an agent learning to make decisions based on feedback from its environment, and the method described in the paper involves selecting and combining features in order to optimize the agent's performance. This is a core concept in reinforcement learning, making it the most related sub-category of AI.
Rule Learning, Theory.   Rule Learning is present in the text as the paper discusses deductively learned knowledge and how it can be harmful in problem solving. The paper proposes a method called utilization filtering to address this issue.   Theory is also present in the text as the paper discusses the problem of redundancy in deductive problem solvers and proposes a theoretical solution in the form of utilization filtering. The paper also presents experimental results to support the effectiveness of the proposed approach.
Reinforcement Learning.   Explanation: The paper discusses the use of reinforcement learning to learn how to act in real-time using dynamic programming. The authors thank several prominent researchers in the field of reinforcement learning for their contributions and insights. The paper also mentions grants from the National Science Foundation and the Air Force Office of Scientific Research, which are typically awarded for research in the field of reinforcement learning.
Neural Networks. This paper belongs to the sub-category of Neural Networks. The paper proposes a new architecture that maintains spatial relations between input features using LEGION dynamics and slow inhibition. The network selects the largest object in an input scene with many objects and can be adjusted to select several largest objects. The paper also shows that a two-stage selection network gains efficiency by combining selection with parallel removal of noisy regions. The network is applied to select the most salient object in real images. The paper discusses the classical topic of winner-take-all (WTA) networks, which are widely used in unsupervised learning, cortical processing, and attentional control. However, WTA networks do not encode spatial relations in the input, and thus cannot support sensory and perceptual processing where spatial relations are important. The paper proposes a new architecture that overcomes this limitation.
Reinforcement Learning. This paper belongs to the sub-category of Reinforcement Learning as it focuses on developing new RL algorithms for solving average-payoff Markovian decision processes. The paper discusses the limitations of existing RL algorithms that focus on maximizing the discounted sum of payoffs and proposes new algorithms that are specifically designed for maximizing the average payoff received per time step. The paper also presents preliminary empirical results to validate these new algorithms.
Theory.   Explanation: The paper presents algorithms for exactly learning unknown environments that can be described by deterministic finite automata. The focus is on the theoretical aspects of the problem, such as the assumptions made about the learner's capabilities and the running time of the algorithms. There is no mention of case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning in the text. Rule learning is somewhat related, as the algorithms can be seen as learning rules for traversing the automaton, but it is not a major focus of the paper.
Theory.   Explanation: This paper presents a theoretical study of the problem of exploring and mapping an unknown directed graph, making limited assumptions on the environment and providing the robot with a pebble as a means of distinguishing between vertices. The paper focuses on proving upper and lower bounds on the number of pebbles needed for efficient mapping, and provides deterministic algorithms for both cases. There is no application or implementation of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Probabilistic Methods. This paper belongs to the sub-category of Probabilistic Methods in AI. The paper discusses the use of the minimum description length (MDL) principle for learning Bayesian networks from data. Bayesian networks are a probabilistic graphical model used to represent uncertain relationships between variables. The paper analyzes the sample complexity of MDL-based learning procedures for Bayesian networks, which is a probabilistic method for learning.
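To make the MDL idea concrete, here is a hypothetical sketch (not the paper's own procedure; all names are illustrative) of the kind of score such learning procedures minimize: the negative log-likelihood of the data under a fixed network structure, plus a (log N)/2 penalty per free parameter.

```python
import math

def mdl_score(data, parents, var_values):
    """MDL score of a fixed Bayesian-network structure: negative
    log-likelihood under maximum-likelihood parameters, plus
    (log N)/2 description-length bits per free parameter.
    `data` is a list of dicts mapping variable -> value; `parents`
    maps each variable to the tuple of its parent variables."""
    n = len(data)
    score = 0.0
    num_params = 0
    for var, pa in parents.items():
        counts, pa_counts = {}, {}
        for row in data:
            key = tuple(row[p] for p in pa)
            counts[(key, row[var])] = counts.get((key, row[var]), 0) + 1
            pa_counts[key] = pa_counts.get(key, 0) + 1
        # negative log-likelihood (unobserved cells contribute 0)
        for (key, _val), c in counts.items():
            score -= c * math.log(c / pa_counts[key])
        # free parameters: (|var| - 1) per parent configuration
        num_pa_configs = 1
        for p in pa:
            num_pa_configs *= len(var_values[p])
        num_params += (len(var_values[var]) - 1) * num_pa_configs
    return score + 0.5 * math.log(n) * num_params
```

Lower scores favour structures that trade fit against complexity, which is the trade-off whose sample complexity the paper analyzes.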
Probabilistic Methods.   Explanation: The paper is a tutorial on learning with Bayesian networks, which are a type of probabilistic graphical model. The paper explains the principles behind Bayesian networks, how to construct them, and how to use them for inference and prediction. The paper also discusses various algorithms for learning the structure and parameters of Bayesian networks from data. Overall, the paper is focused on probabilistic methods for machine learning and inference.
Reinforcement Learning, Probabilistic Methods.   Reinforcement Learning is the main focus of the paper, as the authors propose extensions to a model-based ARL method called H-learning. The paper discusses how H-learning can be scaled up to handle large state spaces by approximating the domain models and the value function.   Probabilistic Methods are also present in the paper, as the authors propose to represent the action models and reward functions in the form of Bayesian networks. This allows for more efficient learning and inference in domains with large state spaces.
Probabilistic Methods.   Explanation: The paper explicitly mentions Bayesian methods, which are a type of probabilistic modeling. The paper discusses how Bayesian methods can be used to create adaptive models that can update their predictions as new data becomes available. The paper also discusses how Bayesian methods can be used to incorporate prior knowledge into the model, which is a key feature of probabilistic modeling. While other sub-categories of AI may also be relevant to the topic of adaptive models, the focus on Bayesian methods makes probabilistic methods the most closely related sub-category.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper proposes a new neural network architecture called the Incremental Grid Growing Neural Network (IGGNN) for visualizing high-dimensional data. The IGGNN is a type of unsupervised neural network that learns the structure of the data by growing a grid of neurons in an incremental manner.   Probabilistic Methods: The paper uses a probabilistic approach to model the uncertainty in the data. The IGGNN uses a Gaussian mixture model to estimate the probability density function of the data, which is used to assign each data point to a neuron in the grid. The paper also uses a probabilistic measure called the Bayesian Information Criterion (BIC) to determine the optimal number of neurons in the grid.
Reinforcement Learning.   Explanation: The paper focuses on a learning agent that has to learn to solve a set of sequential decision tasks using reinforcement learning. The paper presents a new learning algorithm and a modular architecture that achieves transfer of learning by sharing the solutions of elemental SDTs across multiple composite SDTs. The straightforward application of reinforcement learning to multiple tasks is also discussed. Therefore, reinforcement learning is the most related sub-category of AI in this paper.
Genetic Algorithms, Neural Networks, Reinforcement Learning.   Genetic Algorithms: The paper presents an approach that evolves neural network controllers through genetic algorithms.   Neural Networks: The approach presented in the paper involves evolving neural network controllers.   Reinforcement Learning: The approach presented in the paper learns from a single performance measurement over the entire task of grasping an object, which is a characteristic of reinforcement learning.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as the authors address the issue of combining function approximation and RL. The paper presents a new function approximator based on soft state aggregation, a theory of convergence for RL with soft state aggregation, and a heuristic adaptive state aggregation algorithm.   Theory is also a relevant sub-category, as the paper presents new theoretical results on RL with soft state aggregation, including a convergence proof and an intuitive understanding of the effect of state aggregation on online RL.
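As an illustrative aside (the function names and update rule here are assumptions, not the paper's code), soft state aggregation represents V(s) as a soft mixture over cluster values and spreads each TD error across clusters in proportion to membership:

```python
def soft_aggregate_value(state, cluster_probs, cluster_values):
    """Approximate V(s) as a soft mixture of cluster values:
    V(s) = sum_c P(c|s) * v(c), where `cluster_probs[state]` is the
    soft-membership distribution of `state` over clusters."""
    return sum(p * v for p, v in zip(cluster_probs[state], cluster_values))

def td_update_clusters(state, target, cluster_probs, cluster_values, lr=0.1):
    """Distribute a TD error over clusters in proportion to membership."""
    err = target - soft_aggregate_value(state, cluster_probs, cluster_values)
    for c, p in enumerate(cluster_probs[state]):
        cluster_values[c] += lr * p * err
```

With hard (one-hot) memberships this reduces to ordinary state aggregation; the soft version is what makes the convergence analysis tractable.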
Reinforcement Learning.   Explanation: The paper introduces a class of incremental learning procedures specialized for prediction, which is a key aspect of reinforcement learning. The paper specifically focuses on the methods of temporal differences, which have been used in reinforcement learning algorithms such as Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic. The paper proves the convergence and optimality of these methods for special cases and relates them to supervised-learning methods. The paper argues that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. Overall, the paper is primarily focused on reinforcement learning methods for prediction.
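A minimal sketch of the tabular TD(0) prediction rule the paper studies, with illustrative names (not the paper's own code): after each observed step, the estimate V(s) is moved toward the bootstrapped target r + gamma * V(s').

```python
def td0_predict(episodes, alpha=0.1, gamma=1.0):
    """Tabular TD(0) prediction.  `episodes` is a list of trajectories,
    each a list of (state, reward, next_state) steps with
    next_state = None at termination."""
    V = {}
    for traj in episodes:
        for s, r, s_next in traj:
            # bootstrapped target: observed reward plus discounted estimate
            target = r + (gamma * V.get(s_next, 0.0)
                          if s_next is not None else 0.0)
            v = V.get(s, 0.0)
            V[s] = v + alpha * (target - v)
    return V
```

Unlike supervised learning from final outcomes, each update here uses the difference between temporally successive predictions, which is the paper's central point.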
Reinforcement Learning.   Explanation: The paper discusses two Dyna architectures that integrate trial-and-error (reinforcement) learning and execution-time planning into a single process. The Dyna-Q architecture is based on Watkins's Q-learning, a type of reinforcement learning. The paper also mentions that the Dyna-PI architecture can be related to existing AI ideas such as evaluation functions and universal plans (reactive systems), which are also commonly used in reinforcement learning. Therefore, reinforcement learning is the most related sub-category of AI in this paper.
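The Dyna-Q loop described above, a direct Q-learning update from real experience followed by a few model-simulated updates, can be sketched as follows; the deterministic one-step model and all names are illustrative assumptions, not the paper's implementation:

```python
import random

def dyna_q_step(Q, model, s, a, r, s_next, actions,
                alpha=0.1, gamma=0.95, n_plan=5, rng=random):
    """One Dyna-Q step: learn from the real transition, remember it in
    a deterministic one-step model, then replay n_plan remembered
    transitions as simulated (planning) updates."""
    def q_update(s, a, r, s2):
        best = max(Q.get((s2, b), 0.0) for b in actions)
        q = Q.get((s, a), 0.0)
        Q[(s, a)] = q + alpha * (r + gamma * best - q)
    q_update(s, a, r, s_next)       # learning from real experience
    model[(s, a)] = (r, s_next)     # remember what the world did
    for _ in range(n_plan):         # planning from the learned model
        (ps, pa), (pr, ps2) = rng.choice(list(model.items()))
        q_update(ps, pa, pr, ps2)
```

The same update rule serves both learning and planning, which is what lets Dyna integrate the two into a single process.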
Reinforcement Learning. This paper belongs to the sub-category of Reinforcement Learning as it discusses the use of reinforcement learning systems with parameterized function approximators such as neural networks to generalize between similar situations and actions. The paper presents positive results for control tasks using sparse-coarse-coded function approximators and online learning. The paper also discusses the limitations of using actual outcomes ("rollouts") and suggests that reinforcement learning can work robustly in conjunction with function approximators.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper discusses various neural network structures such as radial basis functions, CMACs, Kohonen's self-organizing maps, and perceptrons. It also proposes a new neural network structure for online learning.  Reinforcement Learning: The paper mentions that the proposed neural network structure can be used as a component of reinforcement learning systems. It also compares the performance of the proposed method with other reinforcement learning methods.
Reinforcement Learning, Theory  Reinforcement learning is present in the paper as the authors consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework, which is a common problem in reinforcement learning.   Theory is also present in the paper as the authors develop a decision-theoretic generalization of on-line learning and adapt the multiplicative weight-update rule of Littlestone and Warmuth to this model, yielding bounds that are applicable to a considerably more general class of learning problems. They also apply the resulting learning algorithm to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in R^n.
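The multiplicative weight-update rule mentioned above (Hedge) can be sketched in a few lines; the choice of `beta` and the loss encoding are illustrative, not the paper's exact formulation:

```python
def hedge(losses, beta=0.5):
    """Hedge / multiplicative weights: each option's weight is
    multiplied by beta**loss after every round, so options that
    accumulate loss are discounted geometrically.  `losses` is a list
    of rounds, each a list of per-option losses in [0, 1].  Returns
    the final normalized weights (the allocation to play next)."""
    n = len(losses[0])
    w = [1.0] * n
    for round_losses in losses:
        w = [wi * beta ** li for wi, li in zip(w, round_losses)]
    total = sum(w)
    return [wi / total for wi in w]
```

Allocating resources proportionally to these weights is what yields the worst-case regret bounds relative to the best single option in hindsight.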
Neural Networks, Probabilistic Methods.   Neural Networks: The paper proposes a novel activation function for an on-line learning algorithm that can be easily implemented on a neural network-like model.   Probabilistic Methods: The paper uses the Gram-Charlier expansion to evaluate the average mutual information (MI) of the outputs, which is a probabilistic method. The natural gradient approach is also used to minimize the MI.
Neural Networks, Theory.  Neural Networks: The paper discusses the use of a neural network as a classifier and proposes a new error bound for the classifier chosen by early stopping.  Theory: The paper presents a new theoretical result, the Central Classifier Bound, which provides a tighter bound on the generalization error of the classifier chosen by early stopping. The paper also discusses the theoretical background and motivation for using early stopping as a regularization technique.
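Since early stopping as regularization is central here, a generic sketch of the procedure may help (hypothetical and framework-agnostic, not the paper's experimental setup): train while held-out error keeps improving, and report the best epoch.

```python
def early_stop(train_step, val_error, max_epochs=100, patience=5):
    """Generic early stopping: run `train_step(epoch)` each epoch,
    track the validation error from `val_error(epoch)`, and stop after
    `patience` consecutive epochs without improvement.  Returns the
    best epoch and its validation error."""
    best_err, best_epoch, bad = float('inf'), 0, 0
    for epoch in range(max_epochs):
        train_step(epoch)
        err = val_error(epoch)
        if err < best_err:
            best_err, best_epoch, bad = err, epoch, 0
        else:
            bad += 1
            if bad >= patience:
                break   # validation error stopped improving
    return best_epoch, best_err
```

The classifier "chosen by early stopping" in the paper is the one saved at `best_epoch`, and the Central Classifier Bound concerns its generalization error.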
Neural Networks. This paper belongs to the sub-category of Neural Networks as it investigates the ability of a novel artificial neural network, bp-som, to avoid overfitting. The paper combines a multi-layered feed-forward network with Kohonen's self-organising maps during training to find adequate hidden-layer representations. The paper shows that bp-som outperforms both standard back-propagation and back-propagation with weight decay when dealing with the problem of overfitting.
Probabilistic Methods.   Explanation: The paper describes a model of iterated belief revision that extends the AGM theory of revision to account for the effect of a revision on the conditional beliefs of an agent. The model uses probability theory to determine acceptance conditions for arbitrary right-nested conditionals. The paper also discusses the use of minimal conditional revision, which ensures that an agent makes as few changes as possible to the conditional component of its belief set. Overall, the paper is focused on probabilistic methods for belief revision.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper focuses on the learnability of discrete distributions, which is a problem that falls under the domain of probabilistic methods. The authors use concepts from information theory and probability theory to analyze the learnability of these distributions.  Theory: The paper presents a theoretical analysis of the learnability of discrete distributions. The authors derive bounds on the sample complexity required to learn these distributions, and they prove lower bounds on the performance of any learning algorithm. The paper also discusses the relationship between the sample complexity and the structure of the distribution being learned.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as it discusses the use of function approximation in this subfield of AI. The paper also delves into theoretical properties and conditions of the combination of reinforcement learning and function approximation, making it relevant to the Theory subcategory.
Neural Networks.   Explanation: The paper describes a new self-organising learning algorithm for a network of non-linear units, which is able to separate statistically independent components in the inputs. This algorithm is a type of neural network, which is a sub-category of AI that is inspired by the structure and function of the human brain. The paper also discusses the properties of non-linearities in the transfer function, which are a key feature of neural networks.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper extensively discusses graphical models, which are a type of probabilistic model used for data analysis and empirical learning. The paper also discusses algorithms for learning from data, such as Gibbs sampling and the expectation maximization algorithm, which are probabilistic methods.  Neural Networks: The paper briefly mentions techniques for feed-forward networks, which are a type of neural network, and how they can be synthesized from their graphical specification.
Rule Learning, Theory  Explanation:  The paper discusses the utility problem in speedup learning and proposes a parameterized model and a mechanism to limit the amount of learned knowledge for optimal performance. It also presents a simple selection strategy for improving the speed of a problem solver by retaining all control rules derived from a training problem explanation. These aspects are related to rule learning. The paper also discusses the shape of the learning curve and attempts to relate domain characteristics to it, which is related to theory. The other sub-categories of AI (Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning) are not directly mentioned in the text.
Neural Networks, Case Based.   Neural Networks: The paper compares different types of neural networks, such as single and multi-layered perceptrons and radial-basis functions, for the classification of handwritten digits and speech phonemes. The authors also discuss the architecture and learning process of these networks.  Case Based: The paper discusses the use of kernel estimators, such as k-nearest neighbor, Parzen windows, generalized k-nn, and Grow and Learn, which are memory-based methods that rely on previously seen examples to classify new data. This falls under the category of case-based reasoning.
Case Based, Rule Learning.   The paper describes ongoing research on a method to acquire adaptation knowledge from experience, which is a key characteristic of Case Based Reasoning (CBR). The method uses reasoning from scratch to build up a library of adaptation cases, which involves the creation of task-specific rules for case adaptation. This process of rule creation is a form of Rule Learning.
Theory.   Explanation: This paper presents a theoretical framework for modeling introspective reasoning, specifically in the context of memory search. It does not focus on any specific sub-category of AI, such as case-based reasoning, neural networks, or reinforcement learning. Instead, it proposes a general approach to representing self-knowledge in order to improve memory processing.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the use of machine learning techniques for knowledge acquisition in complex engineering domains. It suggests that combining the strengths of several complementing learning techniques can help overcome the weaknesses of individual techniques. This approach involves probabilistic methods, which are used to model uncertainty and make predictions based on probability distributions.  Rule Learning: The paper discusses the macro and micro perspectives of multistrategy learning. The macro perspective involves decomposing a complex learning task into well-defined learning tasks, while the micro perspective involves designing multistrategy learning techniques for each task. Rule learning is one of the techniques that can be used in the micro perspective to support the acquisition of knowledge for each task.
Theory.   Explanation: The paper presents a theoretical framework for introspective reasoning and multistrategy learning, and does not focus on any specific sub-category of AI such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper introduces a learning algorithm for unsupervised neural networks. It discusses the adaptation of weights in the network using a local delta rule.   Probabilistic Methods: The algorithm is derived from a mean field approximation for large, layered sigmoid belief networks. The paper shows how to infer the statistics of these networks without resort to sampling by solving the mean field equations, which relate the statistics of each unit to those of its Markov blanket. The statistics of the network are used as target values for weight adaptation.
Probabilistic Methods, Rule Learning  Explanation:   Probabilistic Methods: The paper discusses the types of noise that may occur in relational learning systems, which can be seen as a probabilistic approach to modeling uncertainty in the data. Additionally, the two approaches to addressing noise in the relational concept learning algorithm involve using probabilistic models to estimate the likelihood of noise in the data.  Rule Learning: The paper focuses on developing algorithms for learning relational concepts, which can be seen as a form of rule learning. The two approaches to addressing noise in the relational concept learning algorithm involve modifying the rules used to learn the concepts.
Neural Networks.   Explanation: The paper discusses the use of the Error Propagation Algorithm to train a neural network to identify chaotic dynamics. The focus is on the use of neural networks as a tool for learning and prediction, making it clear that this paper belongs to the Neural Networks sub-category of AI.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of connectionist learning, which is a type of machine learning that involves the use of neural networks. The authors propose a method for selecting input variables for connectionist learning, which is based on the use of nonparametric statistical tests.  Probabilistic Methods: The authors use nonparametric statistical tests to select input variables for connectionist learning. These tests are based on probabilistic methods, which involve the use of probability distributions to model uncertainty and variability in data. The authors also discuss the use of Bayesian methods for selecting input variables, which are a type of probabilistic method that involves the use of Bayes' theorem to update probabilities based on new data.
Reinforcement Learning.   Explanation: The paper explicitly mentions "a reinforcement learning paradigm" in the title and describes the use of an external reinforcement signal to guide the learning process. While other sub-categories of AI may also be involved in the implementation of the system, such as neural networks for processing sensor data, the focus of the paper is on the use of reinforcement learning for mobile robot navigation.
This paper does not belong to any of the sub-categories of AI listed. It is a neuroscience paper that discusses the role of corticogeniculate feedback in brightness perception and illusory contours.
Theory.   Explanation: The paper discusses the theoretical understanding of the problem of approximating smooth L^p-functions from spaces spanned by the perturbed integer translates of a radially symmetric function. It does not involve any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Theory.   Explanation: The paper is focused on investigating and developing generalizations of the Probably Approximately Correct (PAC) learning model, with the ultimate goal of agnostic learning. The paper presents theoretical results and algorithms for this type of learning, without relying on any specific sub-category of AI such as neural networks or reinforcement learning.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the role of the visual cortex in figure-ground separation, which involves the processing of visual information through neural networks in the brain. The authors describe how different regions of the visual cortex are responsible for different aspects of this process, such as detecting edges and grouping objects based on their spatial relationships.  Probabilistic Methods: The paper also discusses how probabilistic models can be used to explain some of the phenomena observed in figure-ground separation. For example, the authors describe how Bayesian inference can be used to estimate the probability that a given region of the visual field belongs to the foreground or background. They also discuss how probabilistic models can be used to account for the effects of context and prior knowledge on visual perception.
Case Based, Planning.   Case Based: The paper discusses the design and implementation of a case-based planning framework, which involves using past experiences (cases) to inform future planning decisions. The authors describe how the system retrieves relevant cases, adapts them to the current problem, and uses them to generate plans.  Planning: The paper focuses on the development of a partial-order planner, which is a type of AI system that generates plans by breaking them down into smaller subgoals and then ordering those subgoals based on their dependencies. The authors describe how their case-based planning framework is integrated with the partial-order planner to improve its performance.
Case Based, Reinforcement Learning  Explanation:  - Case Based: The paper describes the design and implementation of a framework for replaying previous plan derivations, which is based on storing and retrieving cases. The framework employs explanation-based learning (ebl) techniques to improve the retrieval of cases.  - Reinforcement Learning: The paper mentions that the framework replays previous plan derivations and extends them to obtain a complete solution for a new problem. When the replayed path cannot be extended, ebl techniques are employed to identify the features of the new problem which prevent this extension. These features are then added as censors on the retrieval of the stored case. This process can be seen as a form of reinforcement learning, where the system learns from its past experiences to improve its future performance.
Neural Networks. This paper describes an alternative class of gradient-based systems consisting of two feedforward nets that learn to deal with temporal sequences using fast weights. The method offers the potential for STM storage efficiency. Various learning methods are derived. Two experiments with unknown time delays illustrate the approach. The paper focuses on the use of neural networks for supervised sequence learning.
Probabilistic Methods.   Explanation: The paper describes a method for reconstructing evolutionary trees using probabilistic methods such as maximum likelihood estimation. The authors also mention the use of simple methods like neighbor-joining, which are based on probabilistic models of sequence evolution. The paper does not mention any other sub-categories of AI such as case-based reasoning, genetic algorithms, neural networks, reinforcement learning, rule learning, or theory.
Rule Learning, Theory.   Explanation: The paper discusses the learning of search-control heuristics in a logic program, which is a form of rule learning. The paper also presents a new first-order induction algorithm for learning useful syntactic and semantic categories, which is a theoretical contribution to the field of machine learning.
Theory  Explanation: The paper proposes a new technique for multi-path execution in architectures with little or no support for predicated instructions. It does not use any AI techniques such as neural networks, genetic algorithms, or reinforcement learning. The paper is focused on the theoretical concept of dynamic predication and its implementation in non-predicated instruction set architectures. Therefore, the paper belongs to the sub-category of AI called Theory.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper proposes a biologically plausible and minimalistic model of ICx self-organization using a two-dimensional Kohonen map to model the ICx.   Reinforcement Learning: The model proposed in the paper involves a learn signal based on the owl's visual attention. When the visual attention is focused in the same spatial location as the auditory input, the learn signal is turned on, and the map is allowed to adapt. This is an example of reinforcement learning, where the system receives a signal (in this case, the learn signal) based on its behavior (in this case, the owl's visual attention) to adapt and improve its performance (in this case, the auditory map).
Theory  Explanation: This paper proposes an alternative explanation for the bimodality observed in the ensemble correlation between two sequential visits to the same environment in old animals. The authors offer a theoretical explanation based on the interaction between orthogonalization properties in the dentate gyrus (DG) region of hippocampus and errors in self-localization. The paper does not use or apply any specific AI sub-category such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper uses a hidden-state reinforcement learning paradigm based on the Partially Observable Markov Decision Process (POMDP) to implement visual attention.   Reinforcement Learning: The attention module selects targets to foveate based on the goal of successful recognition, and uses a new multiple-model Q-learning formulation. The paper also mentions using reinforcement learning to guide the active camera to foveate salient features.
Genetic Algorithms.   Explanation: The paper is specifically focused on comparing two different speciation methods within the Genetic Algorithm framework. While other sub-categories of AI may be relevant to the problem of finding multiple optima in a search space, the paper's primary focus is on the use of GA and its extensions.
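As a concrete, purely illustrative example of one common GA speciation method (the paper's two methods are not necessarily this one), fitness sharing divides each individual's raw fitness by a niche count so that crowded peaks are penalized and the population spreads across multiple optima:

```python
def shared_fitness(pop, raw_fitness, distance, sigma_share=0.1):
    """Fitness sharing: divide each individual's raw fitness by its
    niche count, the summed similarity sh(d) = max(0, 1 - d/sigma)
    to every individual within radius sigma_share.  The self-term is
    always 1, so the niche count is never zero."""
    out = []
    for x in pop:
        niche = sum(max(0.0, 1.0 - distance(x, y) / sigma_share)
                    for y in pop)
        out.append(raw_fitness(x) / niche)
    return out
```

Selection then acts on the shared fitness, so an individual alone on a second optimum can out-compete a crowd on the global one.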
Case Based, Rule Learning  Explanation:   - Case Based: The paper discusses the integration of a legacy database containing design information with a heterogeneous knowledge system for design. This involves the accumulation of huge volumes of design information, which can be seen as a form of case-based reasoning.  - Rule Learning: The paper proposes a method-specific data-to-knowledge compilation approach for integrating the legacy database with the knowledge system. This involves converting data accessed from the database into a form appropriate to the problem-solving method used in the knowledge system, which can be seen as a form of rule learning.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper investigates the performance of the mixture of experts (ME) model, which is a type of neural network, on time series prediction. The ME model is compared to single networks, and the paper discusses how the ME model discovers regimes and characterizes sub-processes.  Probabilistic Methods: The paper discusses how the ME model matches the noise level of the data, which helps to avoid overfitting. The ME model is also able to characterize sub-processes through their variances, which is a probabilistic approach.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper investigates the ability of an attractor network to acquire view-invariant visual representations. The network dynamics developed by Griniasty, Tsodyks & Amit (1993) are used to achieve this goal.   Probabilistic Methods: The paper uses an independent component analysis (ICA) representation of the faces for the input patterns. The ICA representation has advantages over the principal component analysis (PCA) representation for viewpoint-invariant recognition both with and without the attractor network. This suggests that ICA is a better representation than PCA for object recognition.
Genetic Algorithms.   Explanation: The paper specifically focuses on the effects of relaxed synchronization on a parallel genetic algorithm. The experiments and analysis are centered around this type of algorithm and its numerical and parallel efficiency. While other sub-categories of AI may be involved in the implementation or application of the algorithm, genetic algorithms are the primary focus of the paper.
Theory.   Explanation: The paper presents a theoretical approach to decision trees, generalizing key ideas from statistical learning theory and support vector machines. It proposes a method for generating logically simple decision trees with multivariate linear or nonlinear decisions, and characterizes the "optimal" decision tree. The paper does not involve the implementation or application of any specific AI sub-category, but rather presents a theoretical framework for decision tree construction.
Neural Networks, Probabilistic Methods, Theory.   Neural Networks: The paper discusses different types of neural networks, including Radial Basis Functions and some perceptron-like neural networks with one-hidden layer.  Probabilistic Methods: The paper mentions the probabilistic interpretation of regularization, where different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces.  Theory: The paper presents a theoretical framework for understanding regularization principles and their relationship to different types of approximation schemes, including neural networks. It also introduces new classes of smoothness functionals that lead to different types of basis functions.
Probabilistic Methods, Neural Networks  The paper belongs to the sub-category of Probabilistic Methods because it uses a probabilistic framework to model the segmentation problem. Specifically, it uses a Markov Random Field (MRF) model to capture the spatial dependencies between the voxels in the image. The paper also uses a Bayesian framework to estimate the parameters of the MRF model.  The paper also belongs to the sub-category of Neural Networks because it uses a neural network to learn the appearance model of the object being segmented. Specifically, it uses a Convolutional Neural Network (CNN) to learn a feature representation of the object from a set of training images. The learned features are then used to compute the likelihood of each voxel belonging to the object.
Neural Networks.   Explanation: The paper discusses the architecture of a Kohonen network, which is a type of neural network. The figure provided in the paper also shows the connections between input neurons in the network. Therefore, the paper is primarily related to the sub-category of AI known as Neural Networks.
Neural Networks, Case Based  Explanation:   Neural Networks: The paper extensively discusses the use of example-based learning methods for analyzing and synthesizing face images. This involves developing networks for performing analysis tasks such as pose and expression estimation, face recognition, and face detection in cluttered scenes. These networks are neural networks, machine learning models that learn from examples.  Case Based: The paper also mentions the use of descriptive parameters for labeling example face images and "near miss" faces for the problem of face detection. This is an example of case-based reasoning, which involves solving new problems by adapting solutions that were used to solve similar problems in the past. In this case, the descriptive parameters are used to train the example-based learning networks for face analysis and synthesis.
Case Based, Theory  Case-based reasoning is the main focus of the paper, as it describes how a reasoner can improve its understanding of a domain through the use of past experiences stored in memory. The paper also presents a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in novel situations, which complements work in case-based reasoning by providing mechanisms for building a case library. Therefore, the paper belongs to the sub-category of Case Based. Additionally, the paper presents a theory of incremental learning, which can be considered a theoretical aspect of AI, placing it in the sub-category of Theory.
Probabilistic Methods.   Explanation: The paper presents a statistical model of genes in DNA using a Generalized Hidden Markov Model (GHMM) to assign probabilities to transitions between states and the generation of each nucleotide base given a particular state. Machine learning techniques are applied to optimize these probabilities using a standardized training set. The GHMM is flexible and modular, providing simple solutions for integrating cardinality constraints, reading frame constraints, "indels", and homology searching. The paper also discusses the performance of the model compared to other gene-finding systems.
Theory.   Explanation: This paper belongs to the sub-category of AI theory as it deals with the theoretical analysis of repeated stage games with bounded players. The paper does not involve the application of any specific AI technique such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning. Instead, it focuses on developing a theoretical framework to handle the problem of optimality and domination in such games.
Reinforcement Learning, Theory.   Reinforcement learning is present in the paper as the authors study the problem of efficiently learning to play a game optimally against an unknown adversary. They introduce new classes of adversaries and give efficient algorithms for learning to play penny-matching and contract.   Theory is also present in the paper as the authors expand the scope of research on playing games against finite automata and give the most powerful positive result to date for learning to play against finite automata. They also introduce new notions of games against recent history adversaries and statistical adversaries.
Probabilistic Methods.   Explanation: The paper describes the use of decision trees and scoring functions based on probability estimates to identify coding and noncoding regions in DNA sequences. The scoring functions are sets of decision trees that are combined to give a probability estimate, and the optimal segmentation of a DNA sequence into exons and introns is dependent on a separate scoring function that assigns a score reflecting the probability that a subsequence is an exon. Therefore, the paper primarily belongs to the sub-category of Probabilistic Methods in AI.
Neural Networks.   Explanation: The paper specifically discusses the use of backpropagation based neural networks to implement a phase of the computational intelligence process in the PYTHIA expert system. The focus is on using neural networks to identify the class of predefined models whose characteristics match the ones of the specified PDE based application. While other sub-categories of AI may also be relevant to the overall PYTHIA system, this paper primarily focuses on the use of neural networks.
Rule Learning, Probabilistic Methods  Explanation:  The paper describes a learning algorithm based on a reduction criterion called a-reduction, which aims to induce a compact rule set describing the basic dependencies within a set of data. This falls under the sub-category of AI known as Rule Learning.   Additionally, the paper analyzes the learning algorithm using probably approximately correct (PAC) learning results, which falls under the sub-category of AI known as Probabilistic Methods.
Probabilistic Methods, Theory  Probabilistic Methods: This paper belongs to the sub-category of probabilistic methods as it discusses the use of Bayesian networks to mediate instrumental variables. The authors propose a probabilistic graphical model that can be used to estimate causal effects in the presence of unobserved confounding variables.  Theory: This paper also belongs to the sub-category of theory as it presents a theoretical framework for mediating instrumental variables. The authors provide a detailed explanation of the problem of unobserved confounding variables and how it can be addressed using instrumental variables. They also discuss the limitations of existing methods and propose a new approach based on Bayesian networks.
Neural Networks, Theory.   Neural Networks: The paper discusses the performance of learning by networks with localized units, which are a type of neural network. The proposed learning algorithm also involves altering receptive field properties during learning, which is a common technique in neural network training.  Theory: The paper analyzes the effect of unit receptive field parameters on the performance of neural learning and proposes a new learning algorithm based on this analysis. This involves developing a theoretical understanding of how different factors affect neural learning and using this understanding to improve the learning process.
Reinforcement Learning, Probabilistic Methods.   Reinforcement Learning is the main sub-category of AI that this paper belongs to. The paper proposes reinforcement learning algorithms for the solution of Semi-Markov Decision Problems. The algorithms are based on the ideas of asynchronous dynamic programming and stochastic approximation, which are common techniques in reinforcement learning.  Probabilistic Methods are also present in the paper, as Semi-Markov Decision Problems are a type of probabilistic model. The paper discusses Bellman's optimality equation in the context of Semi-Markov Decision Problems, which is a probabilistic equation that describes the optimal value function for a given policy. The paper also applies the proposed algorithms to a simple queueing system, which is a probabilistic model.
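The update described above — asynchronous dynamic programming combined with stochastic approximation for Semi-Markov Decision Problems — can be sketched as follows. This is a minimal tabular illustration, not the paper's exact algorithm: in an SMDP each transition takes a random sojourn time tau, so the discount factor becomes gamma ** tau. The toy two-state queueing-like system, its rewards, and the learning rate are all illustrative assumptions.

```python
import random

def smdp_q_update(Q, s, a, reward, tau, s_next, alpha=0.1, gamma=0.9):
    """One stochastic-approximation step toward Bellman's optimality
    equation for an SMDP: the discount gamma ** tau accounts for the
    random sojourn time tau spent in state s."""
    best_next = max(Q[s_next].values())
    target = reward + (gamma ** tau) * best_next
    Q[s][a] += alpha * (target - Q[s][a])
    return Q[s][a]

# Hypothetical two-state queueing-like system for illustration only
Q = {s: {a: 0.0 for a in ("serve", "idle")} for s in ("empty", "busy")}
random.seed(0)
for _ in range(2000):
    s = random.choice(["empty", "busy"])
    a = random.choice(["serve", "idle"])
    r = 1.0 if (s, a) == ("busy", "serve") else 0.0
    tau = random.randint(1, 3)              # random sojourn time
    s_next = "empty" if a == "serve" else "busy"
    smdp_q_update(Q, s, a, r, tau, s_next)

print(Q["busy"]["serve"] > Q["busy"]["idle"])
```

After enough updates, serving a busy queue earns a higher Q-value than idling, which is the behavior the optimality equation prescribes for this toy reward structure.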
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of the Hierarchical Mixture of Experts (HME) in classification, which is a type of neural network architecture.   Probabilistic Methods: The paper mentions the use of the Expectation Maximisation algorithm, which is a probabilistic method commonly used in machine learning for parameter estimation. The HME also involves a probabilistic approach to combining multiple models.
Probabilistic Methods.   Explanation: The paper discusses an anytime procedure for approximate evaluation of probabilistic networks, which involves varying the granularity of the state spaces of the nodes. This is a key aspect of probabilistic methods in AI, which deal with uncertainty and probability distributions. The paper does not mention any other sub-categories of AI.
Theory  Explanation: The paper discusses the lack of a satisfactory, generic complexity measure for learning problems and proposes a new idea to alleviate this issue. It does not focus on any specific sub-category of AI such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning. Therefore, the paper belongs to the Theory sub-category of AI.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper investigates the statistical bias of backpropagation, which is a type of neural network algorithm. The study involved applying the algorithm to a wide range of learning problems using a variety of different internal architectures.  Probabilistic Methods: The paper discusses the statistical effects that may need to be exploited in supervised learning and proposes that learning algorithms will typically have some form of bias towards particular classes of effect. It also shows how the existence of statistical bias in backpropagation constitutes a weakness in the algorithm's ability to discount noise.
Neural Networks.   Explanation: The paper discusses the use of a specific type of neural network, the scatter-partitioning Gaussian RBF model, for function regression and image segmentation. The paper also mentions the need for further studies in the framework of RBF networks, indicating a focus on neural network-based approaches.
Neural Networks, Rule Learning.   Neural Networks: The paper discusses the use of artificial neural networks for learning symbolic rules. It compares the approach to traditional symbolic learning algorithms and highlights the advantage of neural networks in forming concept representations.   Rule Learning: The paper specifically focuses on the extraction of symbolic rules from trained neural networks using the NofM extraction algorithm and soft weight-sharing training method. It compares the extracted rules to those learned using the C4.5 system and evaluates their accuracy and comprehensibility.
Neural Networks. This paper is primarily focused on parallel algorithms for simulating neural networks, and references numerous studies and techniques related to neural networks.
Reinforcement Learning, Rule Learning, Theory.   Reinforcement Learning is present in the paper as the focus is on how a system can improve its performance at a given task through learning and introspection.   Rule Learning is present as the paper proposes a taxonomy of possible reasoning failures and their declarative representations, which can be seen as a set of rules for identifying and addressing these failures.   Theory is present as the paper presents a theory of Meta-XPs, which are explanation structures that help the system identify failure types and choose appropriate learning strategies.
Theory.   This paper presents a theoretical framework for input to state stabilizability for parameterized families of systems. It does not involve any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Rule Learning, Neural Networks.   The paper describes an approach to rule extraction from artificial neural networks, which falls under the sub-category of Rule Learning. The use of Backpropagation-style neural networks and Validity Interval Analysis (VI-Analysis) is also discussed, which falls under the sub-category of Neural Networks.
Theory.   Explanation: The paper presents a theoretical framework for feature subset selection based on Information Theory and proposes an efficient algorithm for feature selection that approximates the optimal feature selection criterion. The paper does not discuss any specific AI sub-category such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning.
Reinforcement Learning, Probabilistic Methods, Neural Networks  Reinforcement Learning is the primary sub-category of AI that this paper belongs to. The title of the paper explicitly mentions Reinforcement Learning, and the abstract describes the paper as being about "Reinforcement Learning for Planning and Control." The paper discusses various reinforcement learning algorithms and their applications in planning and control.  Probabilistic Methods are also present in the paper, as the authors discuss the use of probabilistic models in reinforcement learning. For example, the authors mention the use of Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs) in reinforcement learning.  Neural Networks are also mentioned in the paper, as the authors discuss the use of neural networks in reinforcement learning. For example, the authors mention the use of deep reinforcement learning, which involves using deep neural networks to approximate the value function or policy in reinforcement learning.
Case Based, Rule Learning  Explanation:  - Case Based: The paper addresses the problem of case-based learning in the presence of irrelevant features. It reviews previous work on attribute selection and presents a new algorithm, Oblivion, that carries out greedy pruning of oblivious decision trees, which effectively store a set of abstract cases in memory. - Rule Learning: The Oblivion algorithm prunes decision trees based on the relevance of features, which can be seen as learning rules for selecting relevant features. The paper also discusses the implications of their experiments for future work on irrelevant features, which could involve further rule learning approaches.
Neural Networks, Rule Learning  Explanation:   Neural Networks: The paper discusses the use of neural networks to learn Boolean concepts in the presence of many irrelevant features. Specifically, the authors propose a method called "Boolean Feature Selection" that uses a neural network to identify relevant features and then trains a second neural network on the selected features.  Rule Learning: The paper also discusses the use of rule learning algorithms to learn Boolean concepts. Specifically, the authors compare their proposed method to a rule learning algorithm called RIPPER and show that their method outperforms RIPPER on several datasets.
Reinforcement Learning, Genetic Algorithms.   Reinforcement learning is the main focus of the paper, as the authors explore the use of reinforcement learning to shape a robot to perform a predefined target behavior. They connect both simulated and real robots to a learning classifier system with an extended genetic algorithm.   Genetic algorithms are also present in the paper, as the authors use a parallel implementation of a learning classifier system with an extended genetic algorithm to classify different kinds of Animat-like behaviors. They also show that classifier systems with genetic algorithms can be practically employed to develop autonomous agents.
Probabilistic Methods.   Explanation: The paper discusses probabilistic inference in Bayesian belief networks and proposes a new method for approximate knowledge representation based on the property of similarity of states. The focus is on reducing the computational complexity of probabilistic inference in networks with multiple similar states. While the paper does not explicitly mention other sub-categories of AI, it is clear that the approach taken falls under the umbrella of probabilistic methods.
Rule Learning, Reinforcement Learning.   Rule Learning is present in the text as the paper discusses the use of decision trees to learn a set of primitive actions. Decision trees are a type of rule-based learning algorithm.   Reinforcement Learning is also present in the text as the paper discusses the use of a reward function to guide the learning process. The paper states that "the learning algorithm is guided by a reward function that evaluates the quality of the learned set of actions." This is a key characteristic of reinforcement learning.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses the use of Bayesian belief networks and Helmholtz machines, which are probabilistic models. The algorithm presented in the paper also uses EM and Gibbs sampling, which are probabilistic methods.  Neural Networks: The model presented in the paper can be interpreted as a stochastic recurrent network, which is a type of neural network. The algorithm also uses feedback from higher levels to resolve ambiguity in lower-level states, which is a characteristic of neural networks.
Theory.   Explanation: The paper focuses on studying an extension of the distribution-free model of learning introduced by Valiant, which is a theoretical model of machine learning. The paper presents general methods for bounding the rate of error tolerable by any learning algorithm, efficient algorithms tolerating nontrivial rates of malicious errors, and equivalences between problems of learning with errors and standard combinatorial optimization problems. The paper does not discuss any specific implementation or application of machine learning algorithms, which are the focus of other sub-categories such as Neural Networks, Reinforcement Learning, or Rule Learning.
Probabilistic Methods.   Explanation: The paper discusses the problem of computing the posterior probability of a model class, which is a fundamental concept in Bayesian statistics and probabilistic modeling. The authors investigate various methods for approximating this posterior, which is a common problem in probabilistic methods. The specific model family they use for their experiments is finite mixture distributions, which is a probabilistic model. Therefore, this paper belongs to the sub-category of AI known as Probabilistic Methods.
Probabilistic Methods.   Explanation: The paper explores the use of finite mixture models for building decision support systems capable of sound probabilistic inference. The formulation of the model construction problem is in the Bayesian framework for finite mixture models, and Bayesian inference is performed given such a model. The model construction problem can be seen as missing data estimation and a realization of the Expectation-Maximization (EM) algorithm is described for finding good models. The comparison of results is based on the best results reported in the literature on the datasets in question. All of these aspects are related to probabilistic methods in AI.
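The EM realization mentioned above, treating model construction as missing data estimation, can be sketched for the simplest case. This is an assumed illustration, not the paper's implementation: one EM pass for a two-component one-dimensional Gaussian mixture with fixed unit variance; the data, initial weights, and means are synthetic.

```python
import math
import random

def em_step(data, weights, means, var=1.0):
    """One EM iteration for a 1-D Gaussian mixture with fixed variance."""
    # E-step: posterior responsibility of each component for each point
    resp = []
    for x in data:
        p = [w * math.exp(-(x - m) ** 2 / (2 * var))
             for w, m in zip(weights, means)]
        z = sum(p)
        resp.append([pi / z for pi in p])
    # M-step: re-estimate mixture weights and component means
    n = len(data)
    new_weights = [sum(r[k] for r in resp) / n for k in range(len(means))]
    new_means = [sum(r[k] * x for r, x in zip(resp, data)) / (n * new_weights[k])
                 for k in range(len(means))]
    return new_weights, new_means

random.seed(1)
data = ([random.gauss(0, 1) for _ in range(200)] +
        [random.gauss(5, 1) for _ in range(200)])
w, m = [0.5, 0.5], [1.0, 4.0]        # illustrative initial guesses
for _ in range(25):
    w, m = em_step(data, w, m)
print(sorted(round(mu, 1) for mu in m))
```

With well-separated clusters the estimated means settle near the true centers of 0 and 5, showing how the "missing" component assignments are filled in probabilistically at each E-step.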
Case Based, Reinforcement Learning  Explanation:  - Case Based: The paper discusses the use of a model of reasoning behavior to allow a reasoner to introspectively detect and repair failures of its own reasoning process. This involves using past cases to inform future reasoning.  - Reinforcement Learning: The ROBBIE system implements a model of its planning processes to improve the planner in response to reasoning failures. This involves using feedback (reinforcement) to adjust the system's behavior.
Reinforcement Learning, Theory.  Explanation:  - Reinforcement Learning: The paper is primarily focused on reinforcement learning algorithms for multi-criteria sequential decision making problems.  - Theory: The paper discusses the structural properties of these problems and derives asymptotically optimal decision-making algorithms.
Probabilistic Methods.   Explanation: The paper discusses graphical models, which are a type of probabilistic method used in AI to represent and reason about uncertainty in multivariate distributions. The paper specifically focuses on the Markov equivalence of different types of graphical models, including undirected graphs, acyclic directed graphs, and chain graphs. The paper uses graph-theoretic methods to characterize the Markov equivalence of these models.
Probabilistic Methods.   Explanation: The paper discusses the construction of Bayesian network models using a stochastic simulated annealing algorithm, which is a probabilistic method for searching the model space. The paper also focuses on a specific type of Bayesian network structure called Bayesian prototype trees, which have a polynomial time algorithm for Bayesian reasoning.
Probabilistic Methods.   Explanation: The paper discusses the use of probabilities and intervals to represent them, as well as the computation of probabilities for compound events based on knowledge of underlying distributions. These are all key concepts in probabilistic methods, which involve using probability theory to model and analyze uncertain events or systems. The other sub-categories of AI listed (Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, Theory) are not directly relevant to the content of this paper.
Theory.   Explanation: The paper is a theoretical work that discusses the concepts of stabilization and input-to-state stability in control systems. It does not involve any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning. Therefore, the paper belongs to the sub-category of AI theory.
Rule Learning, Theory.   The paper discusses the use of heuristic classification and concept learning in weak-theory domains, which involves the development of rules and theories to classify and understand data. The authors also discuss the limitations of traditional rule-based systems and propose a new approach that combines rule learning with heuristic classification. Therefore, the paper is primarily focused on rule learning and the development of theories in AI.
Reinforcement Learning.   Explanation: The paper presents U-Tree, a reinforcement learning algorithm that uses selective attention and short-term memory to solve a highway driving task. The paper discusses how U-Tree combines the advantages of work in instance-based (or memory-based) learning and work with robust statistical tests for separating noise from task structure. The paper also mentions related work on Prediction Suffix Trees, Parti-game, G-algorithm, and Variable Resolution Dynamic Programming, all of which are related to reinforcement learning.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses a measure for feature selection that is fast to compute and complete but not exhaustive. This measure is based on probabilities and is used to search for relevant features.   Theory: The paper discusses the problem of feature selection and proposes a new measure that is monotonic and fast to compute. The authors also discuss the limitations of existing error- or distance-based measures and explain why they are not monotonic. The paper presents experiments to verify the effectiveness of the proposed measure.
This paper does not belong to any of the sub-categories of AI listed. It is focused on a hardware mechanism for dynamic reordering of memory references, and does not involve any AI techniques or algorithms.
Reinforcement Learning.   Explanation: The paper describes a method based on reinforcement learning algorithms, such as dynamic programming and Q-learning, for the dialogue agent to learn to choose an optimal dialogue strategy. The empirical component uses the PARADISE evaluation framework to provide the performance function needed by the learning algorithm. There is no mention of case-based, genetic algorithms, neural networks, probabilistic methods, rule learning, or theory in the text.
Rule Learning, Probabilistic Methods  Explanation:  - Rule Learning: The paper discusses Reduced Error Pruning, which is a rule learning algorithm used in Inductive Logic Programming. The proposed method, Incremental Reduced Error Pruning, is also a rule learning algorithm.  - Probabilistic Methods: The paper mentions "noisy domains", which suggests the presence of uncertainty or probability in the data. The proposed method also uses a probabilistic approach to determine which rules to prune.
Neural Networks.   Explanation: The paper discusses the use of neural networks for EEG pattern recognition and classification. The authors compare three different representations of EEG signals and use a two-layer neural network for classification. The paper does not mention any other sub-categories of AI such as case-based reasoning, genetic algorithms, probabilistic methods, reinforcement learning, rule learning, or theory.
Reinforcement Learning. This is the main topic of the paper and is discussed extensively throughout. Other sub-categories of AI, such as Neural Networks and Probabilistic Methods, are mentioned briefly in the context of their use in reinforcement learning, but they are not the main focus of the paper.
Reinforcement Learning, Rule Learning.   Reinforcement Learning is present in the text as the XCSM classifier system is a type of reinforcement learning algorithm. The paper discusses how XCSM performs in non-Markovian environments, which is a common setting for reinforcement learning problems.   Rule Learning is also present in the text as XCSM is a rule-based learning algorithm. The paper discusses how XCSM evolves optimal solutions through the use of rules and how the exploration strategies employed with XCS may not be adequate for complex non-Markovian environments.
Genetic Algorithms, Neural Networks, Probabilistic Methods.   Genetic Algorithms: The paper discusses the resemblance of the proposed method with genetic algorithms. The random generation of n-bit vectors and the selection of the best solutions within the neighborhood are similar to the concepts of crossover and selection in genetic algorithms.  Neural Networks: The paper introduces the concept of Hebbian learning rule, which is a well-known concept in the theory of artificial neural networks. The probability vector is updated using this rule, which is a form of unsupervised learning.  Probabilistic Methods: The main concept of the proposed method is the probability vector, which determines the probabilities of appearance of '1' entries in n-bit vectors. This vector is used for the random generation of n-bit vectors that form a neighborhood. The process is repeated until the probability vector entries are close either to zero or to one, which determines the optimal solution. This approach is based on probabilistic methods.
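The loop described above — sample a neighborhood of n-bit vectors from a probability vector, select the best, and nudge the vector toward it with a Hebbian-style rule until the entries approach zero or one — can be sketched in the style of population-based incremental learning. This is an assumed illustration, not the paper's algorithm: the one-max fitness function, learning rate, and population size are hypothetical choices.

```python
import random

def probability_vector_search(n_bits=20, pop=30, lr=0.1, iters=200, seed=0):
    """Evolve a probability vector p, where p[i] is the probability of
    a '1' in bit position i, toward the best sampled solutions."""
    rng = random.Random(seed)
    p = [0.5] * n_bits
    for _ in range(iters):
        # Random generation: sample a neighborhood of candidate vectors
        samples = [[1 if rng.random() < pi else 0 for pi in p]
                   for _ in range(pop)]
        # Selection: keep the best solution (one-max fitness = count of 1s)
        best = max(samples, key=sum)
        # Hebbian-style update: pull p toward the best sample
        p = [(1 - lr) * pi + lr * b for pi, b in zip(p, best)]
    return p

p = probability_vector_search()
print(sum(p) / len(p))      # average entry after the search
```

On this fitness function the entries drift toward one, mirroring the convergence criterion the paragraph describes; the resemblance to crossover-free selection in genetic algorithms is visible in the sample-and-select step.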
Reinforcement Learning, Probabilistic Methods, Theory.   Reinforcement Learning is present in the paper as the authors discuss the limitations of uninformed learning and the need for reinforcement learning to overcome these limitations. They also mention the use of reinforcement learning in trading spaces.   Probabilistic Methods are present in the paper as the authors discuss the use of Bayesian networks in modeling uncertainty and making decisions in trading spaces.   Theory is present in the paper as the authors discuss the theoretical underpinnings of uninformed learning and the need for a more informed approach to learning in complex environments. They also discuss the theoretical implications of their findings for the field of AI.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper discusses the implementation of the MIN-FEATURES bias, which is a rule-based approach to learning. The paper introduces algorithms such as FOCUS-2 and Mutual-Information-Greedy, which are rule-based methods for identifying relevant features.   Probabilistic Methods are also present in the text as the paper discusses the use of heuristics such as Mutual-Information-Greedy, Simple-Greedy, and Weighted-Greedy, which are probabilistic methods for approximating the MIN-FEATURES bias. These heuristics employ greedy algorithms that trade optimality for computational efficiency.
Probabilistic Methods.   Explanation: The paper describes a statistical approach to decision tree modeling, which involves modeling each decision in the tree parametrically and generating an output from an input and a sequence of decisions. The resulting model yields a likelihood measure of goodness of fit, allowing ML and MAP estimation techniques to be utilized. The paper also discusses a hidden Markov version of the tree for data sequences that have temporal dependencies. These are all characteristics of probabilistic methods in AI.
Probabilistic Methods.   Explanation: The paper discusses the use of probabilistic methods to infer the binary vector s given z and A, and assumptions about the statistical properties of s and n. The authors use a free energy minimization algorithm to solve this problem, which is a common approach in probabilistic modeling and inference.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper discusses the development of visual perception and suggests computational models that can yield insights into this process. These models likely involve neural networks, which are modeled after the structure and function of the brain.  Reinforcement Learning: The paper discusses the role of experience in the development of the visual system, and how carefully controlled manipulation of the environment can reveal specific developmental or learning mechanisms. This suggests that reinforcement learning may be a relevant sub-category of AI for understanding perceptual development.
Probabilistic Methods.   Explanation: The paper discusses the problem of ergodicity of transition probability matrices in Markovian models, such as hidden Markov models (HMMs), and how it affects the propagation of long-term context information and learning a hidden state representation. The paper also uses results from Markov chain theory to show how the problem of diffusion of context and credit is reduced when the transition probabilities approach 0 or 1. These concepts are all related to probabilistic methods in AI.
Neural Networks.   Explanation: The paper focuses on the use and application of neural networks in industrial settings. It discusses the requirements for information processing in modern industry and how neural networks fulfill those requirements. The paper also presents successful applications of neural networks and provides a checklist for applying them. Finally, it discusses neural network projects done by a research group. There is no mention of any other sub-category of AI in the paper.
Neural Networks.   Explanation: The paper focuses on the development of an automatic inspection system called NeuroPipe, which is based on a neural classifier. The system was trained using manually collected defect examples to detect defects like metal loss in pipelines. Therefore, the paper belongs to the sub-category of AI called Neural Networks.
Probabilistic Methods.   Explanation: The paper describes the use of a mixture of locally linear generative models for recognizing handwritten digits. The models are probabilistic in nature, as they are used to evaluate the log-likelihoods of new images under each model. The EM algorithm used for training the models is also a probabilistic method.
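The classify-by-log-likelihood idea described above can be sketched as follows. This is a deliberately simplified stand-in: one diagonal Gaussian per class rather than the paper's mixture of locally linear models, with made-up 2-D features standing in for digit images.

```python
import numpy as np

def fit_gaussian(X):
    """Fit a diagonal Gaussian to rows of X (one generative model per class)."""
    mu = X.mean(axis=0)
    var = X.var(axis=0) + 1e-6  # small floor avoids zero variance
    return mu, var

def log_likelihood(x, mu, var):
    """Log-density of x under a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

# Toy data: two "digit classes" in a 2-D feature space.
rng = np.random.default_rng(0)
X0 = rng.normal([0.0, 0.0], 0.5, size=(100, 2))
X1 = rng.normal([3.0, 3.0], 0.5, size=(100, 2))
models = [fit_gaussian(X0), fit_gaussian(X1)]

# Classify a new point by the model giving the higher log-likelihood.
x_new = np.array([2.8, 3.1])
scores = [log_likelihood(x_new, mu, var) for mu, var in models]
print(int(np.argmax(scores)))  # -> 1
```

With a true mixture model, EM would additionally estimate component responsibilities; the decision rule (highest model log-likelihood) is the same.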
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of a nonlinear gating network and several competing experts, each of which learns to predict the conditional mean. The experts adapt their width to match the noise level in their regime, and the gating network learns to predict the probability of each expert given the input.   Probabilistic Methods: The paper discusses the underlying statistical assumptions and derives weight update rules. The gating network predicts the probability of each expert given the input, and the experts learn to match their variances to the local noise levels, which can be viewed as matching the local complexity of the model to the local complexity of the data.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper describes the use of machine learning systems, including Golem, Magnus Assistant, and Retis, to model drug activity, achieving better results than traditional methods.  Probabilistic Methods: The paper discusses the use of machine learning tools to model the quantitative structure-activity relationship (QSAR) of drugs. This involves analyzing large amounts of data and making predictions based on probabilities, which is a key aspect of probabilistic methods in AI.
Rule Learning.   Explanation: The paper describes a rule-based approach to integrating heterogeneous databases in the HIPED system. The backend processing involves mapping queries using "facts" and "rules" that establish correspondences among the data in the databases. The approach is implemented using a deductive database system as the rule processing engine. Therefore, the paper belongs to the sub-category of AI known as Rule Learning.
Reinforcement Learning.   Explanation: The paper discusses the temporal difference methods, which are a type of reinforcement learning algorithm. The paper specifically focuses on learning to achieve dynamic goals, which is a subproblem of reinforcement learning. The DG-learning algorithm presented in the paper is a reinforcement learning algorithm that is designed to efficiently achieve dynamically changing goals. Therefore, reinforcement learning is the most related sub-category of AI to this paper.
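The temporal difference learning the entry describes can be sketched with plain tabular Q-learning on a toy chain world. This is a simplification: DG-learning handles dynamically changing goals, while this sketch fixes a single goal state.

```python
import numpy as np

n_states, goal = 5, 4           # chain of states 0..4, goal at the right end
actions = [-1, 1]               # move left / move right
Q = np.zeros((n_states, 2))
alpha, gamma, epsilon = 0.5, 0.9, 0.2
rng = np.random.default_rng(1)

for episode in range(200):
    s = 0
    while s != goal:
        # Epsilon-greedy action selection.
        a = rng.integers(2) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s2 = min(max(s + actions[a], 0), n_states - 1)
        r = 1.0 if s2 == goal else 0.0
        # Temporal-difference update toward the one-step bootstrapped target.
        target = r + (0.0 if s2 == goal else gamma * Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2

# The greedy policy should move right from every non-goal state.
print([int(np.argmax(Q[s])) for s in range(n_states - 1)])
```

Extending the table to index on (state, goal) pairs is the basic move that lets TD methods handle multiple or changing goals.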
Theory.   Explanation: The paper focuses on proving the intractability of learning several classes of Boolean functions in the distribution-free model of learning from examples. The methods used in the paper are representation independent and demonstrate an interesting duality between learning and cryptography. The paper does not discuss or apply any of the other sub-categories of AI listed in the question.
Probabilistic Methods.   Explanation: The paper describes a new method for determining the consensus sequence in DNA fragment assemblies that directly incorporates aligned ABI trace information into consensus calculations via TraceData Classifications. The method extracts and sums evidence indicated by the representation to determine consensus calls, which is a probabilistic approach to determining the consensus sequence. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Neural Networks, Rule Learning.   Neural Networks: The paper describes a connectionist model of the acquisition of morphology, which is a type of neural network. The model learns to map forms onto meanings and makes phonological generalizations that are embedded in connection weights.   Rule Learning: The paper discusses the use of suffixation, prefixation, and template rules in the experiments with artificial stimuli. The model learns to apply these rules to different morphological categories, which enables transfer.
Rule Learning, Inductive Logic Programming.   The paper presents an algorithm that combines traditional EBL techniques and recent developments in inductive logic programming to learn effective clause selection rules for Prolog programs. This falls under the sub-category of Rule Learning, which involves learning rules or decision trees from data. The algorithm also incorporates inductive logic programming, which is a subfield of machine learning that focuses on learning logic programs from examples.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper describes a model of cortical visual processing that mimics the hierarchical processing areas of the primate visual system. Each stage is constructed as a competitive network utilizing a modified Hebb-like learning rule, called the trace rule, which enables neurons to learn about whatever is invariant over short time periods in the representation of objects as the objects transform in the real world.  Probabilistic Methods: The trace rule learning algorithm used in the model is a probabilistic method that enables neurons to learn the statistical invariances about objects during their transformations, by associating together representations which occur close together in time.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper proposes the use of recurrent neural networks with feedback into the input units for handling missing or asynchronous data.   Probabilistic Methods: The paper contrasts the proposed approach with probabilistic models (e.g. Gaussian) of the missing variables, which attempt to model the distribution of the missing variables given the observed variables.
Neural Networks, Theory.   The paper discusses the design of neural networks through transformations of objective functions. This falls under the sub-category of Neural Networks. The paper also presents a collection of algebraic transformations that can be applied to objective functions, which is a theoretical approach.
Case Based.   Explanation: The paper describes the development and implementation of four case-based design systems, which involve recalling previously known designs from memory and adapting them to fit the current design context. The focus is on the representation and use of case memory, rather than on other sub-categories of AI such as genetic algorithms, neural networks, probabilistic methods, reinforcement learning, rule learning, or theory.
Probabilistic Methods, Neural Networks, Theory.   Probabilistic Methods: The paper describes a model based on the statistical theory of Kalman filtering, which is a probabilistic method used for estimating the state of a dynamic system based on noisy measurements. The model utilizes a hierarchical network whose successive levels implement Kalman filters operating over successively larger spatial and temporal scales. The network also learns an internal model of the spatiotemporal dynamics of the input stream by adapting the synaptic weights at each hierarchical level in order to minimize prediction errors.  Neural Networks: The model described in the paper is a hierarchical network that utilizes Kalman filters at each level. The network assigns specific computational roles to the inter-laminar connections known to exist between neurons in the visual cortex. The paper also presents experimental results demonstrating the ability of this model to perform robust spatiotemporal segmentation and recognition of objects and image sequences.  Theory: The paper presents a biologically plausible model of dynamic recognition and learning in the visual cortex based on the statistical theory of Kalman filtering from optimal control theory. The model respects key neuroanatomical data such as the reciprocity of connections between visual cortical areas. The paper also provides a more detailed exposition of the model and presents experimental results demonstrating its effectiveness.
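In the scalar case, the Kalman filtering machinery the entry refers to reduces to a short predict-correct loop. This is a generic textbook sketch, not the paper's hierarchical cortical model; all parameter values are illustrative.

```python
import numpy as np

def kalman_1d(zs, q=0.01, r=0.5):
    """Scalar Kalman filter: estimate a constant (or slowly drifting) state
    from noisy measurements zs. q is the process noise variance, r the
    measurement noise variance."""
    x, p = 0.0, 1.0            # state estimate and its variance
    out = []
    for z in zs:
        p = p + q              # predict: uncertainty grows by process noise
        k = p / (p + r)        # Kalman gain: how much to trust the measurement
        x = x + k * (z - x)    # correct using the prediction error (innovation)
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(3)
true = 2.0
zs = true + rng.normal(0, np.sqrt(0.5), 100)
est = kalman_1d(zs)
print(round(float(est[-1]), 1))
```

The hierarchical model described above stacks such predict-correct loops, with each level's prediction error driving learning at the level below.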
Genetic Algorithms, Reinforcement Learning, Theory.  Genetic Algorithms: The paper discusses the Baldwin effect, which is an evolutionary phenomenon that can guide a population towards areas of high fitness in genotype space. This is a form of genetic optimization that is similar to the process of genetic algorithms.  Reinforcement Learning: The paper discusses the Hiding effect, which is another interaction between learning and evolution. This effect shows that learning can reduce the selection pressure between individuals by "hiding" their genetic differences. This is similar to the process of reinforcement learning, where an agent learns to take actions that maximize a reward signal.  Theory: The paper presents a theoretical analysis of the trade-off between the Baldwin effect and the Hiding effect in determining the influence of learning on evolution. The authors also investigate two factors that contribute to this trade-off, the cost of learning and landscape epistasis, through experimental simulations.
Reinforcement Learning, Probabilistic Methods.   Reinforcement Learning: The paper focuses on optimizing hyper-parameters for function approximators, a common problem in reinforcement learning. The algorithm described is designed to spend less time evaluating poor parameter settings and more time honing its estimates in the most promising regions of the parameter space, mirroring the exploration-exploitation trade-off central to reinforcement learning.  Probabilistic Methods: The racing algorithm for continuous optimization uses statistical estimates to identify the most promising regions of the parameter space and to determine which parameter settings to evaluate next.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the accuracy of function approximation using selected input features, which is a probabilistic approach to feature selection.   Rule Learning: The paper proposes three greedier algorithms to enhance the efficiency of feature selection processing, which involves creating rules to select features based on their importance. The paper also proposes using these algorithms to develop an offline handwriting recognition system, which involves creating rules to recognize characters based on their features.
Probabilistic Methods.   Explanation: The paper discusses mixture modelling, which is a probabilistic method used to model data that comes from multiple distributions. The minimum message length (MML) criterion, which is used in the paper to distinguish between overlapping and non-overlapping distributions, is also a probabilistic method. The paper does not discuss any other sub-categories of AI.
Neural Networks.   Explanation: The paper discusses the performance of the GCel-512 and PowerXPlorer for parallel neural network simulations. The entire paper is focused on the application of neural networks and their performance on different parallel processors. None of the other sub-categories of AI are mentioned or discussed in the paper.
Genetic Algorithms, Case Based.   Genetic Algorithms: The paper describes a random mutation hill climbing algorithm, which is a type of genetic algorithm that uses mutation to search for optimal solutions.   Case Based: The paper discusses the use of prototypes in nearest neighbor classification, which is a type of case-based reasoning where new instances are classified based on their similarity to previously observed instances. The algorithms described in the paper aim to find sets of prototypes that are representative of the target classes and can be used to accurately classify new instances.
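A minimal sketch of random mutation hill climbing for prototype selection, assuming a 1-nearest-neighbor classifier and synthetic two-cluster data; the function names and settings are illustrative, not taken from the paper.

```python
import numpy as np

def knn_accuracy(protos, X, y):
    """Accuracy of 1-nearest-neighbor using only the prototype rows of X."""
    d = np.linalg.norm(X[:, None, :] - X[protos][None, :, :], axis=2)
    pred = y[protos][np.argmin(d, axis=1)]
    return float(np.mean(pred == y))

def rmhc_prototypes(X, y, k=4, iters=200, seed=0):
    """Random mutation hill climbing: swap one prototype at a time,
    keeping the change only when 1-NN accuracy does not drop."""
    rng = np.random.default_rng(seed)
    protos = rng.choice(len(X), size=k, replace=False)
    best = knn_accuracy(protos, X, y)
    for _ in range(iters):
        cand = protos.copy()
        cand[rng.integers(k)] = rng.integers(len(X))  # random mutation
        acc = knn_accuracy(cand, X, y)
        if acc >= best:
            protos, best = cand, acc
    return protos, best

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(5, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
protos, acc = rmhc_prototypes(X, y)
print(round(acc, 2))
```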
Neural Networks.   Explanation: The paper presents a new self-organizing neural network model with two variants, one for unsupervised learning and the other for supervised learning. The model combines the self-organizing network with the radial basis function approach to achieve better results in classification tasks. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper mentions that COLUMBUS uses two artificial neural networks to encode the characteristics of the robot's sensors and typical environments it will face. These networks allow for knowledge transfer across different environments the robot will face over its lifetime. COLUMBUS' models represent both the expected reward and the confidence in these expectations.   Reinforcement Learning: The paper mentions that COLUMBUS' task is to explore and model the environment efficiently while avoiding collisions with obstacles. Exploration is achieved by navigating to low confidence regions. An efficient dynamic programming method is employed in the background to find minimal-cost paths that, executed by the robot, maximize exploration.
Genetic Algorithms, Theory.   Genetic Algorithms is the primary sub-category of AI that this paper belongs to, as it analyzes the performance of a Genetic Algorithm (GA) called Culling and compares it to other algorithms on a problem referred to as Additive Search Problem (ASP). The paper also discusses the failure of standard GAs to achieve implicit parallelism in a generalized version of ASP called k-ASP.   Theory is another sub-category of AI that this paper belongs to, as it provides insight into when and how GAs can beat competing methods, analyzes the optimal culling point for selective breeding, and discusses the Schema theorem in relation to ASP.
Case Based, Probabilistic Methods  Explanation:  - Case Based: The paper discusses the nearest neighbor algorithm, which is a type of case-based reasoning. The paper also presents algorithms for pruning instances from the training set, which can be seen as a form of case selection.  - Probabilistic Methods: The paper mentions that the nearest neighbor algorithm and its derivatives are often successful at generalization, which can be seen as a probabilistic approach to learning. The paper also discusses algorithms for reducing the number of instances retained in memory, which can involve probabilistic methods for selecting which instances to keep.
Reinforcement Learning.   Explanation: The paper explicitly describes a formulation of reinforcement learning and how it is applied to the multi-robot domain. The other sub-categories of AI are not mentioned or implied in the text.
Rule Learning, Theory.   The paper discusses the effectiveness of decision tree induction using the popular algorithms C4.5 and CART, which are both rule learning methods. The focus is on the greedy heuristic approach used in these algorithms, which the paper examines from a theoretical standpoint. The paper also uses empirical experiments to test the effectiveness of the greedy heuristic, combining theoretical analysis with practical evaluation.
Theory  Explanation: The paper discusses theoretical results about input-to-state stabilizability, and does not involve any practical implementation or application of AI techniques.
Neural Networks.   Explanation: The paper describes an alternative formulation of attractor networks, which are a type of neural network. The paper discusses the difficulties in designing and training these networks, and proposes a new formulation that is easier to work with and interpret. The paper also presents simulation experiments to explore the behavior of these networks.
Theory.   Explanation: The paper discusses Wolpert's no-free-lunch theorems and their implications for generalisation in machine learning. It does not focus on any specific AI sub-category such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning. Instead, it presents a theoretical analysis of the limitations of generalisation without domain knowledge.
Neural Networks.   Explanation: The paper discusses the limitations of traditional neural network learning, and proposes an incremental learning algorithm that can modify the network structure by adding and removing units and links. The paper also discusses the biological plausibility of incremental learning in neural networks. While other sub-categories of AI may be tangentially related to the topic, neural networks are the most directly relevant and central to the paper's content.
Probabilistic Methods.   Explanation: The paper introduces a probability model, the mixture of trees, and presents a family of efficient algorithms that use EM and the Minimum Spanning Tree algorithm to find the ML and MAP mixture of trees for a variety of priors, including the Dirichlet and the MDL priors. Mixture models, maximum-likelihood and MAP estimation via EM, and Bayesian priors are all hallmarks of probabilistic methods in AI.
Neural Networks.   Explanation: The paper investigates the use of artificial neural networks (ANNs) to solve two fundamental problems in analyzing DNA sequences. The authors describe their adaptation of the approach used by Uberbacher and Mural to identify coding regions in human DNA, and compare the performance of ANNs to several conventional methods for predicting reading frames. The experiments demonstrate that ANNs can outperform these conventional approaches. Therefore, the paper primarily belongs to the sub-category of Neural Networks in AI.
Reinforcement Learning, Rule Learning.   Reinforcement learning is the main focus of the paper, as the control system learns on the basis of an external reinforcement signal which is negative in case of a collision and zero otherwise. The paper describes the algorithm for learning the correct mapping from the input (state) vector to the output (steering) signal using update rules from Temporal Difference learning.   Rule learning is also present in the paper, as the system learns rules for avoiding collisions based on the reinforcement signal. The algorithm used for a discrete coding of the input state space also involves the creation of rules for mapping the input space to the output signal.
Theory  Explanation: This paper belongs to the Theory sub-category of AI. The paper discusses the conceptual difficulties of viewing knowledge transfer as a separate process from inductive learning and proposes a task analysis that situates transfer as a subprocess within induction. It does not discuss or utilize any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Neural Networks.   Explanation: The paper focuses on improving neural network learning through the incorporation of information from other networks. The experiments described in the paper specifically address the transfer of knowledge between neural networks.
Neural Networks.   Explanation: The paper discusses the use of a Backpropagation trained neural network, ALVINN, for autonomous steering of a vehicle in road and highway environments. It also explores alternative training methods for the neural network. Therefore, the paper primarily belongs to the sub-category of AI known as Neural Networks.
Neural Networks, Reinforcement Learning  Neural Networks: The paper proposes a neural network-based approach for unsupervised real-time error-based learning and control of movement trajectories. The authors describe the architecture of the proposed Vector Associative Map (VAM) and how it can be used to learn and control movement trajectories.  Reinforcement Learning: The paper discusses how the VAM can be trained using reinforcement learning techniques. The authors describe how the VAM can be used to learn from error signals and adjust movement trajectories accordingly. They also discuss how the VAM can be used to learn from positive and negative feedback to improve performance.
Probabilistic Methods.   Explanation: The paper discusses the use of probabilistic models for decision support systems and describes a Bayesian approach for constructing finite mixture models from sample data. The models used need to be probabilistic in nature, and the output of a model has to be a probability distribution, not just a set of numbers. The approach is based on a two-phase unsupervised learning process, and the overfitting problem common to many traditional learning approaches can be avoided, as the learning process automatically regulates the complexity of the model.
The paper does not belong to any specific sub-category of AI as it is a technical report and does not focus on any particular AI technique or method. Therefore, none of the options apply.
Neural Networks, Theory.   Neural Networks: The paper introduces and analyzes a new algorithm for linear classification which combines Rosenblatt's perceptron algorithm with Helmbold and Warmuth's leave-one-out method. The perceptron algorithm is a type of neural network that learns to classify input data into different categories.   Theory: The paper discusses the theoretical aspects of the algorithm, including its relationship to Vapnik's maximal-margin classifier and its efficiency in terms of computation time. The authors also show that the algorithm can be efficiently used in very high dimensional spaces using kernel functions.
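The perceptron update at the heart of the algorithm can be sketched in a few lines. This shows the plain Rosenblatt perceptron only; the paper's leave-one-out voting and kernel extensions are omitted.

```python
import numpy as np

def perceptron(X, y, epochs=10):
    """Rosenblatt's perceptron: on each mistake, nudge the weight vector
    toward (or away from) the misclassified example. Labels y must be +/-1."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:   # mistake (or on the boundary)
                w += yi * xi
                b += yi
    return w, b

# Linearly separable toy problem.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1, 1, -1, -1])
w, b = perceptron(X, y)
print(np.sign(X @ w + b))  # -> [ 1.  1. -1. -1.]
```

Replacing the inner products with a kernel function is what makes the algorithm usable in very high dimensional spaces, as the entry notes.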
Theory.   Explanation: This paper explores the concept of simultaneous multithreading (SMT) as an alternative architecture for parallel processing, which allows multiple threads to compete for and share all of the processor's resources every cycle. The paper discusses how SMT can use both instruction-level parallelism (ILP) and thread-level parallelism (TLP) interchangeably to accommodate variations in parallelism. However, the paper does not discuss any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning. Therefore, the paper belongs to the sub-category of Theory.
Probabilistic Methods.   Explanation: The paper presents a framework for building probabilistic automata using Gibbs distributions to model state transitions and output generation. The EM algorithm with a generalized iterative scaling procedure is used for parameter estimation. The paper also discusses relations with certain classes of stochastic feedforward neural networks, but this is not the main focus of the paper. The other sub-categories of AI (Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, Theory) are not directly relevant to the content of the paper.
Neural Networks, Reinforcement Learning  Neural Networks: The paper discusses a new method for converging in the SDM memory, which utilizes neural networks. Specifically, it mentions the use of a "neural network-based convergence algorithm" and describes how the method involves "training a neural network to predict the next state of the SDM memory."  Reinforcement Learning: The paper also mentions the use of reinforcement learning in the context of the new method for converging in the SDM memory. It describes how the method involves "using reinforcement learning to guide the convergence process" and mentions the use of a "reward function" to incentivize the neural network to converge to the desired state.
Probabilistic Methods, Rule Learning  Explanation:   Probabilistic Methods: The paper discusses the use of bagging and boosting techniques in regression, which are probabilistic methods that involve building a committee of regressors.   Rule Learning: The paper uses regression trees as fundamental building blocks in both bagging and boosting committee machines. Regression trees are a type of rule-based learning algorithm.
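Bagging with tree-based regressors can be sketched with depth-1 regression trees (stumps) standing in for full regression trees; boosting and the specific committee machines the paper studies are not reproduced here.

```python
import numpy as np

def fit_stump(x, y):
    """Depth-1 regression tree: find the split minimizing squared error."""
    best = None
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lo, hi = best
    return lambda q, t=t, lo=lo, hi=hi: np.where(q <= t, lo, hi)

def bagged_stumps(x, y, n_trees=25, seed=4):
    """Bagging: fit each stump on a bootstrap resample, average predictions."""
    rng = np.random.default_rng(seed)
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(len(x), size=len(x))   # bootstrap sample
        trees.append(fit_stump(x[idx], y[idx]))
    return lambda q: np.mean([t(q) for t in trees], axis=0)

x = np.linspace(0, 1, 50)
y = (x > 0.5).astype(float) + np.random.default_rng(5).normal(0, 0.1, 50)
predict = bagged_stumps(x, y)
print(round(float(predict(np.array([0.9]))[0]), 1))
```

Averaging over bootstrap resamples is what reduces the variance of the individual trees, which is the main point of the bagging committee.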
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses how NEULA processes imprecise or incomplete information using approximate probabilistic reasoning.   Neural Networks: The paper describes how NEULA is a hybrid neural-symbolic expert system shell that uses neural networks to perform pattern recognition operations even in noisy environments.
Genetic Algorithms.   Explanation: The paper discusses co-evolutionary simulations, which involve the evolution of multiple populations of organisms that interact with each other. The authors use genetic algorithms to track the progress of adaptation in these simulations, specifically focusing on the "Red Queen" hypothesis, which suggests that co-evolving populations must constantly adapt to maintain their relative fitness. The paper describes how the authors measure adaptive progress using fitness landscapes and other metrics, and how they use genetic algorithms to optimize the parameters of their simulations. Overall, the paper is primarily focused on the use of genetic algorithms in co-evolutionary simulations.
Probabilistic Methods.   Explanation: The paper proposes the use of flexible parametric models, specifically mixtures of normals, to accommodate departures from standard parametric models in measurement error models. This approach involves probabilistic methods for modeling the errors and estimating the parameters.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper investigates the use of multi-parent reproduction in genetic algorithms and conducts experiments on function optimization problems using genetic algorithms.  Theory: The paper provides a theoretical foundation for the use of multi-parent operators in genetic algorithms by showing how these operators work on distributions.
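Multi-parent reproduction can be illustrated with uniform scanning crossover, where each gene of the child is drawn from one of several parents. This is a generic sketch; the paper's specific operators and experiments are not reproduced.

```python
import random

def uniform_scanning_crossover(parents, rng):
    """Multi-parent uniform scanning: each gene of the child is copied
    from a randomly chosen parent, so more than two parents contribute."""
    length = len(parents[0])
    return [rng.choice(parents)[i] for i in range(length)]

# Three parents, each filled with its own marker value for visibility.
p1 = [0] * 8
p2 = [1] * 8
p3 = [2] * 8
child = uniform_scanning_crossover([p1, p2, p3], random.Random(6))
print(child)
```

With two parents and this rule, the operator reduces to ordinary uniform crossover, which is why multi-parent scanning is a natural generalization.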
Probabilistic Methods. This paper belongs to the sub-category of probabilistic methods because it uses Bayesian methods to select covariates in hierarchical models of hospital admission counts. The authors use Bayes factors to compare models with different sets of covariates and select the model with the highest Bayes factor as the best model. The paper also discusses the use of prior distributions and Markov chain Monte Carlo methods in Bayesian analysis.
Neural Networks.   Explanation: The paper provides a tutorial overview of neural networks, specifically focusing on backpropagation networks as a method for approximating nonlinear multivariable functions. The paper discusses the advantages of neural networks compared to other modern regression techniques, and provides examples of their successful use in various applications. The paper does not discuss any other sub-categories of AI.
Probabilistic Methods, Reinforcement Learning, Theory.   Probabilistic Methods: The paper discusses the use of probabilistic models for information filtering and selection mechanisms in learning systems. For example, it mentions the use of Bayesian networks for modeling uncertainty in decision-making.  Reinforcement Learning: The paper discusses the use of reinforcement learning for selecting relevant information in learning systems. It mentions the use of reward-based mechanisms to guide the selection process.  Theory: The paper presents a theoretical framework for understanding information filtering and selection mechanisms in learning systems. It discusses the concept of generalization as search and how it can be used to guide the selection process. It also discusses the role of selection mechanisms in improving the performance of learning systems.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper discusses the use of Gaussian regression, which is a probabilistic method for modeling the relationship between input and output variables. The authors also mention the use of Bayesian methods for model selection and regularization.  Theory: The paper presents theoretical results on the convergence of the optimal finite-dimensional linear model to the true underlying function as the number of basis functions increases. The authors also discuss the trade-off between model complexity and generalization performance, which is a fundamental concept in machine learning theory.
Probabilistic Methods.   The paper discusses the use of Gaussian probability functions for classification, which is a probabilistic method. The author also mentions the use of Parzen window estimation, which is another probabilistic method. The paper focuses on the estimation of probability density functions and the use of these estimates for classification, which is a key aspect of probabilistic methods in AI.
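The Parzen-window estimation mentioned above can be sketched as an average of kernels centered on the training samples; classification then compares the estimated class densities (equal class priors are assumed here for simplicity).

```python
import numpy as np

def parzen_density(x, samples, h=0.5):
    """Parzen-window density estimate: average of Gaussian kernels of
    width h centered on each training sample."""
    k = np.exp(-0.5 * ((x - samples[:, None]) / h) ** 2)
    return k.mean(axis=0) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(7)
class_a = rng.normal(0.0, 1.0, 200)
class_b = rng.normal(4.0, 1.0, 200)

# Classify each point by whichever class density is higher (equal priors).
x = np.array([0.5, 3.5])
label = (parzen_density(x, class_b) > parzen_density(x, class_a)).astype(int)
print(label)  # -> [0 1]
```

The window width h plays the same bias-variance role as the variance of a fitted Gaussian: too small overfits the samples, too large washes out the class boundary.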
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses using decision trees to predict the lifetime of dynamically allocated objects. Decision trees are a probabilistic method used for classification and regression tasks.   Rule Learning: The paper also mentions that during training, a large number of features can be used and the decision tree will automatically choose the relevant subset. This is an example of rule learning, where the algorithm learns which features are important for making accurate predictions.
Genetic Algorithms.   Explanation: The paper describes an approach using an evolutionary system with variable length coding, which is a characteristic of genetic algorithms. The system learns a more efficient, problem-specific coding by identifying successful combinations of genes in the population and combining them into higher-level evolved genes. This is also a common technique used in genetic algorithms. The paper does not mention any other sub-categories of AI.
Rule Learning, Theory.   Explanation: The paper discusses the design and analysis of learning algorithms for Boolean concepts, which falls under the sub-category of Rule Learning. The paper also presents theoretical upper bounds on the coverage of any learning algorithm and describes two algorithms that approach this bound, which falls under the sub-category of Theory. The other sub-categories of AI (Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning) are not directly relevant to the content of this paper.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses the use of temporal Bayesian networks as a structured representation of MDPs, which allows for the exploitation of variable and propositional independencies.   Reinforcement Learning: The paper presents an algorithm, called structured policy iteration (SPI), for constructing optimal policies in MDPs without explicit enumeration of the state space. The algorithm is based on the commonly used modified policy iteration algorithm and can be used in conjunction with recent approximation methods.
Neural Networks, Rule Learning.   Neural Networks: The paper discusses a new class of multilayer connectionist architectures known as ASOCS, which is based on networks of adaptive digital elements that attempt to learn an adaptive set of arbitrary vector mappings.   Rule Learning: The paper describes how ASOCS enters function specification incrementally by use of rules, rather than complete input-output vectors, and how a processing network is able to extract critical features from a large environment and give output in a parallel fashion. Learning also uses parallelism and self-organization such that a new rule is completely learned in time linear with the depth of the network.
Probabilistic Methods.   Explanation: The paper proposes a method for maximum working likelihood inference using Markov chain Monte Carlo (MCMC) and Monte Carlo quadrature, which are probabilistic methods commonly used in Bayesian inference. The paper also discusses the consistency and asymptotic normality of the proposed method, which are concepts related to the convergence of posterior distributions in probabilistic methods.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the response properties of neurons at early stages of the visual system and how a network that learns sparse codes of natural scenes develops localized, oriented, bandpass receptive fields similar to those in the primate striate cortex.  Probabilistic Methods: The paper discusses the statistical regularities present in natural images and how these can be used to code images more efficiently. It also mentions that many of the important forms of structure require higher-order statistics to characterize, which makes models based on linear Hebbian learning or principal components analysis inappropriate for finding efficient codes for natural images. The paper suggests that maximizing the sparseness of the representation is a good objective for efficient coding of natural scenes.
Genetic Algorithms.   Explanation: The paper explicitly mentions the use of problem generators to study the behavior of evolutionary algorithms, and specifically focuses on the effects of epistasis on the performance of genetic algorithms. The other sub-categories of AI are not mentioned or relevant to the content of the paper.
Genetic Algorithms.   Explanation: The paper discusses the use of genetic algorithms and specifically focuses on the crossover operator used in these algorithms. It does not discuss any other sub-category of AI.
Theory.   Explanation: This paper is focused on examining the complexity of different variants of conditional logics, which is a theoretical topic in artificial intelligence. The paper does not discuss the application of these logics to any specific AI subfield such as case-based reasoning, neural networks, or reinforcement learning.
Neural Networks.   Explanation: The paper describes a neural network architecture that combines two properties found to be useful for learning sequential tasks: higher-order connections and incremental introduction of new units. The network adds higher orders when needed by adding new units that dynamically modify connection weights. The paper also compares the performance of this architecture with recurrent networks in experiments with the Reber grammar.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper proposes a principle for unsupervised learning of distributed non-redundant internal representations of input patterns based on adaptive predictors for each representational unit. The paper also discusses various implementations of the principle for finding binary factorial codes.   Probabilistic Methods: The paper focuses on finding factorial codes, which are codes where the probability of the occurrence of a particular input is simply the product of the probabilities of the corresponding code symbols. The paper proposes methods for finding factorial codes automatically using Occam's razor for finding codes using a minimal number of units. The paper also discusses how such codes are potentially relevant for segmentation tasks, speeding up supervised learning, and novelty detection.
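The factorial property described above can be checked directly: a code is factorial when the joint probability of every code word equals the product of the per-unit marginal probabilities. A minimal sketch with hypothetical two-unit binary codes (not the paper's learned representations):

```python
from itertools import product

def is_factorial(joint, tol=1e-9):
    """Check the factorial-code property: p(code) == prod_i p_i(symbol_i)
    for every code word, where p_i are the per-unit marginals.

    joint: dict mapping binary code tuples to probabilities.
    """
    n = len(next(iter(joint)))
    # marginal probability that unit i outputs 1
    marg = [sum(p for code, p in joint.items() if code[i] == 1) for i in range(n)]
    for code in product((0, 1), repeat=n):
        expected = 1.0
        for i, bit in enumerate(code):
            expected *= marg[i] if bit else 1.0 - marg[i]
        if abs(joint.get(code, 0.0) - expected) > tol:
            return False
    return True

# Independent units (factorial) vs. perfectly correlated units (not factorial).
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
correlated = {(0, 0): 0.5, (1, 1): 0.5}
```

The correlated code fails the test because its two units carry redundant information, which is exactly the redundancy the paper's principle seeks to remove.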
Theory.   Explanation: The paper primarily focuses on theoretical aspects of learning in the PAC model with faulty oracles, specifically exploring the use of statistical queries and giving necessary and sufficient conditions for efficient learning with various types of distribution noise. While the paper does touch on some practical applications of these theoretical results, such as expanding the class of distributions on which we can weakly learn monotone Boolean formulae, the majority of the content is theoretical in nature. Therefore, the paper belongs to the sub-category of AI known as Theory.
Reinforcement Learning, Genetic Algorithms.   Reinforcement Learning is present in the text as the learning architecture used is a one step Q-learning using look-up table, where the inherent parameters are initial Q-values, learning rate, discount rate of rewards, and exploration rate. The fitness measure used is also based on the number of times the individual achieves the goal in the later half of life.   Genetic Algorithms are present in the text as learners evolve through a genetic algorithm based on the fitness measure. The paper describes how the genetic algorithm is used to optimize the parameter values in Q-learning.
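A minimal sketch of the one-step Q-learning setup described above, using a look-up table; the parameter values here (initial Q-value, learning rate, discount rate, exploration rate) are illustrative stand-ins for the values a genetic algorithm would evolve, not the paper's actual settings:

```python
import random

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One-step Q-learning: move Q[s][a] toward r + gamma * max_a' Q[s'][a']."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q

def epsilon_greedy(Q, s, epsilon=0.1):
    """Exploration rate epsilon: random action with prob epsilon, else greedy."""
    if random.random() < epsilon:
        return random.choice(list(Q[s].keys()))
    return max(Q[s], key=Q[s].get)

# Look-up table initialized with an initial Q-value of 0.0; a GA individual
# would encode alpha, gamma, epsilon, and this initial value as its genome.
Q = {s: {a: 0.0 for a in ("left", "right")} for s in (0, 1)}
Q = q_update(Q, 0, "right", 1.0, 1)
```

Under the fitness measure described in the paper, each such parameterization would be scored by how often the agent reaches the goal in the later half of its life, and selection would operate on those scores.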
Probabilistic Methods, Reinforcement Learning, Theory.   Probabilistic Methods: The paper deals with partially observable Markov decision processes (pomdps), which are probabilistic models.  Reinforcement Learning: The paper discusses dynamic-programming updates, which are a crucial operation in a wide range of pomdp solution methods, including reinforcement learning.  Theory: The paper examines the problem of performing exact dynamic-programming updates in pomdps from a computational complexity viewpoint and offers a new algorithm, called the witness algorithm, which can compute updated value functions efficiently on a restricted class of pomdps.
Theory.   Explanation: Of the sub-categories listed, this paper fits only Theory: it is a theoretical study of the limits of instruction-level parallelism in programs, specifically the SPEC95 benchmark suite, and does not involve the application of any AI techniques or algorithms.
Probabilistic Methods.   Explanation: The paper presents a framework for building probabilistic automata using Gibbs distributions to model state transitions and output generation. The parameter estimation is carried out using an EM algorithm, which is a common probabilistic method for estimating parameters in statistical models. The paper also discusses relations with certain classes of stochastic feedforward neural networks, but this is not the main focus of the paper. Therefore, while Neural Networks may be a related sub-category, it is not as closely related as Probabilistic Methods. The other sub-categories (Case Based, Genetic Algorithms, Reinforcement Learning, Rule Learning, Theory) are not present in the text.
Neural Networks, Theory.   Neural Networks: The paper discusses models of unsupervised correlation-based synaptic plasticity, which are a type of neural network. The paper also discusses the effects of constraints on the dynamics of these networks.  Theory: The paper presents theoretical analysis of the effects of different types of constraints on the dynamics of neural networks. It also discusses different methods of enforcing constraints and their implications.
Reinforcement Learning, Theory.   Reinforcement learning is the main topic of the paper, as it discusses the convergence of reinforcement learning algorithms.   Theory is also applicable as the paper provides a rigorous proof of convergence of these DP-based learning algorithms by relating them to the powerful techniques of stochastic approximation theory via a new convergence theorem.
Reinforcement Learning, Theory  Reinforcement Learning is the primary sub-category of AI that this paper belongs to. The paper compares the performance of two optimization algorithms, Gradient Descent and Exponentiated Gradient Descent, in both supervised and reinforcement learning settings. The reinforcement learning experiments involve training an agent to play a game, which is a classic example of reinforcement learning.  Theory is also a relevant sub-category, as the paper discusses the theoretical properties of the two optimization algorithms and how they relate to the performance observed in the experiments. The paper provides mathematical proofs and analysis to support its conclusions, which is a hallmark of theoretical work in AI.
Probabilistic Methods.   Explanation: The paper discusses objective functions within a Bayesian learning framework, which is a probabilistic method for machine learning. The criteria for data selection are also based on probabilistic assumptions about the hypothesis space.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper proposes an incremental feature map algorithm that adds nodes and connections to a regular, 2-dimensional grid according to the input distribution, resulting in a map that represents the cluster structure of the high-dimensional input. This is a common approach in neural network algorithms, where nodes and connections are added or removed based on the input data.  Probabilistic Methods: The paper discusses the problem of representing the structure of clusters in high-dimensional input data with unknown distribution. This is a common problem in probabilistic methods, where the goal is to model the probability distribution of the input data. The proposed algorithm explicitly represents the cluster structure of the input data, which is a key aspect of probabilistic methods.
Probabilistic Methods.   Explanation: The paper discusses the use of Lattice Conditional Independence (LCI) models for the analysis of multivariate normal data, which is a probabilistic method. The paper also discusses the class of graphical Markov models determined by acyclic digraphs (ADGs), which is another probabilistic method. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper focuses on using genetic algorithms for design optimization. It discusses the use of crossover and mutation operators, population size, and fitness functions in the genetic algorithm.   Reinforcement Learning: The paper proposes a method for incorporating reinforcement learning into the genetic algorithm to improve its performance. The reinforcement learning component is used to learn to be selective in the design optimization process, by assigning rewards to individuals in the population that contribute to the overall fitness of the population.
Genetic Algorithms.   Explanation: The paper explicitly describes the use of genetic algorithms for optimization in engineering design domains, and discusses the development of new GA operators and strategies tailored to these domains. There is no mention of any other sub-category of AI in the text.
Neural Networks.   Explanation: The paper is specifically about learning internal representations using error propagation in neural networks. The authors discuss the architecture and training of neural networks in detail, and do not address any other sub-category of AI.
Theory  Explanation: This paper discusses the theoretical suitability of using evolutionary trees as a universal model for multiple sequence alignment. It does not focus on the implementation or application of any specific AI sub-category such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Neural Networks, Rule Learning.   Neural Networks: The paper proposes a network model with feedforward and feedback connections that uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and the desired response are presented simultaneously as input: feedforward connections form self-organized representations of the input, while suppressed feedback connections learn the transpose of the feedforward connectivity. During recall, sensory input activates the self-organized representation, and the resulting activity generates the learned response.  Rule Learning: The paper proposes a mechanism for selectively suppressing transmission at feedback synapses during learning, which allows associative feedback to be combined with self-organization of the feedforward synapses; the mappings themselves are acquired through local learning rules applied at the synapses.
Probabilistic Methods.   Explanation: The paper discusses a Markov chain Monte Carlo method for sampling from a distribution based on its density function. Markov chain Monte Carlo methods are a type of probabilistic method used in AI and statistics to generate samples from a probability distribution. The paper specifically discusses slice sampling, which is a variation of Markov chain Monte Carlo methods that involves alternating uniform sampling in the vertical direction with uniform sampling from the horizontal slice defined by the current vertical position. This approach is often easier to implement than Gibbs sampling, another popular Markov chain Monte Carlo method, and may be more efficient than easily-constructed versions of the Metropolis algorithm. The paper also discusses overrelaxed versions of slice sampling, which can improve sampling efficiency by suppressing random walk behavior. Overall, the paper's focus on probabilistic methods for sampling from a distribution makes it most closely related to the sub-category of Probabilistic Methods in AI.
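The alternation described above can be sketched for a one-dimensional unnormalized density; this is a minimal illustration of the vertical/horizontal scheme (with simple stepping-out and shrinkage to locate the slice), not the paper's full method:

```python
import math, random

def slice_sample(f, x0, w=1.0, n=1000):
    """Slice sampling for an unnormalized 1-D density f.

    Alternates a uniform draw in the vertical direction (a height y under
    f(x)) with a uniform draw from the horizontal slice {x : f(x) > y},
    located here by stepping out and then shrinking the bracket.
    """
    samples, x = [], x0
    for _ in range(n):
        y = random.uniform(0.0, f(x))      # vertical: height under the curve
        lo, hi = x - w, x + w              # step out to bracket the slice
        while f(lo) > y:
            lo -= w
        while f(hi) > y:
            hi += w
        while True:                        # horizontal: sample, then shrink
            x1 = random.uniform(lo, hi)
            if f(x1) > y:
                x = x1
                break
            if x1 < x:
                lo = x1
            else:
                hi = x1
        samples.append(x)
    return samples

random.seed(0)
draws = slice_sample(lambda x: math.exp(-0.5 * x * x), x0=0.0, n=2000)
mean = sum(draws) / len(draws)
```

Note that only evaluations of the density function are required, which is why the approach is often easier to apply than Gibbs sampling.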
Probabilistic Methods, Reinforcement Learning, Theory.   Probabilistic Methods: The paper discusses Markov decision problems (MDPs), which are probabilistic models used in AI research.  Reinforcement Learning: The paper mentions that MDPs are of interest to AI researchers studying reinforcement learning.  Theory: The paper summarizes results regarding the complexity of solving MDPs and the running time of MDP solution algorithms, and argues that more study is needed to reveal practical algorithms for solving large problems quickly. The paper also suggests alternative methods of analysis that rely on the structure of MDPs.
Reinforcement Learning, Neural Networks.   The paper primarily focuses on reinforcement learning and proposes a design that allows a connectionist Q-learner to accept advice from an external observer. The approach is based on techniques from knowledge-based neural networks, where the advice is inserted directly into the agent's utility function.
Probabilistic Methods.   Explanation: The paper presents a method for condensing information in a protein database into a mixture of Dirichlet densities, which are probabilistic models. These mixtures are used to estimate expected amino acid probabilities at each position in a statistical model, which improves the model's generalization capacity. The paper contains complete derivations of the Dirichlet mixture formulas and methods for optimizing the mixtures to match particular databases, which are all probabilistic methods.
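One plausible sketch of how expected probabilities are estimated from observed counts under a Dirichlet mixture prior, following the standard posterior-mean formula; the toy three-letter alphabet, mixture weights, and alpha vectors below are illustrative, not the paper's database-optimized mixtures:

```python
import math

def log_beta(alpha):
    """Log of the multivariate Beta function B(alpha)."""
    return sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))

def dirichlet_mixture_mean(counts, components):
    """Posterior mean probabilities under a mixture of Dirichlet priors.

    components: list of (mixture weight q_k, alpha_k vector).
    Component responsibility: p(k | counts) ~ q_k * B(counts + alpha_k) / B(alpha_k).
    """
    n = sum(counts)
    log_post = [math.log(q) + log_beta([c + a for c, a in zip(counts, alpha)])
                - log_beta(alpha)
                for q, alpha in components]
    m = max(log_post)
    w = [math.exp(lp - m) for lp in log_post]
    z = sum(w)
    probs = [0.0] * len(counts)
    for wk, (q, alpha) in zip(w, components):
        denom = n + sum(alpha)
        for i, (c, a) in enumerate(zip(counts, alpha)):
            probs[i] += (wk / z) * (c + a) / denom
    return probs

# Two hypothetical components: one favoring the first letter, one near-uniform.
components = [(0.5, [5.0, 1.0, 1.0]), (0.5, [1.0, 1.0, 1.0])]
probs = dirichlet_mixture_mean([2, 0, 0], components)
```

The mixture acts as a learned pseudocount scheme: sparse observed counts are smoothed toward whichever prior component best explains them.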
Case Based, Rule Learning  Explanation:   This paper belongs to the sub-category of Case Based AI because it discusses the technique of reusing past problem solving experiences to improve performance, which is a key aspect of case-based reasoning. Additionally, the paper proposes adaptation strategies for overcoming mismatches between past experiences and new problems, which is a common challenge in case-based reasoning.  The paper also belongs to the sub-category of Rule Learning because it discusses the use of adaptation strategies for overcoming mismatches, which involves learning rules for how to adapt past experiences to new problems. Additionally, the empirical study compares the performance of different approaches, which involves learning rules for when to use each approach.
Neural Networks, Theory.   Neural Networks: The paper discusses the use of second-order recurrent neural networks as dynamical recognizers for formal languages. It also presents an empirical method for testing whether the language induced by the network is regular or not.  Theory: The paper explores the capabilities of second-order recurrent neural networks in inducing languages, including non-regular ones. It also provides a detailed ε-machine analysis of trained networks for both regular and non-regular languages.
Rule Learning, Neural Networks.   Rule Learning is present in the text as the algorithm presented in the paper is for inducing decision trees with multivariate tests at internal decision nodes. This is a classic example of rule learning, where the algorithm learns rules to make decisions.   Neural Networks are present in the text as the algorithm constructs each test by training a linear machine. Linear machines are a type of neural network that can be used for classification tasks.
Genetic Algorithms.   Explanation: The paper describes the use of evolutionary techniques, specifically genetic algorithms, to evolve physical structures made out of Lego parts. The authors provide a fitness function and a model of physical reality to guide the evolution process, and they use a simulator to evaluate the feasibility and functionality of the evolved structures. The paper also discusses the limitations of simulation and the need for a margin of safety in the design process, which are common themes in genetic algorithm research.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the need to merge knowledge bases that may have differing opinions, and the objective of integration is to construct one system that exploits all the knowledge that is available and has good performance. This involves probabilistic reasoning to combine the knowledge from different sources.  Rule Learning: The paper describes the methodology of knowledge integration, which involves constructing an integrated knowledge base from several separate sources. This involves learning rules from the separate knowledge bases and combining them to form a single system. The implemented system (INTEG.3) is also described, which uses rule-based reasoning to integrate knowledge.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms are present in the text as the paper discusses the use of evolutionary algorithms to generate self-supporting structures. The authors state that "the structures are generated using a genetic algorithm that evolves a population of candidate solutions."   Reinforcement Learning is also identified because the authors use a "fitness function" to evaluate the effectiveness of the evolved structures; this evaluative feedback guides the genetic algorithm much like a reward signal, allowing the structures to improve over successive generations.
Probabilistic Methods.   Explanation: The paper describes a compression algorithm for probability transition matrices, which is a probabilistic method used in various fields such as physics, engineering, and computer science. The algorithm compresses the matrix while maintaining its probabilistic nature, which is a key characteristic of probabilistic methods. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Probabilistic Methods. This paper discusses exact sampling methods for Markov chain Monte Carlo simulations, which are probabilistic methods commonly used in Bayesian inference. The paper specifically focuses on constructing general purpose algorithms for Bayesian computation using these methods.
Case Based, Reinforcement Learning  Explanation:  This paper belongs to the sub-category of Case Based AI because it discusses instance-based learning methods, which are a type of case-based reasoning. The paper also belongs to the sub-category of Reinforcement Learning because it discusses the advantages of instance-based methods for autonomous systems, which often use reinforcement learning techniques.
Theory.   Explanation: The paper presents a theoretical analysis of a variant of the standard on-line learning model, called the "apple tasting" model, and proposes a strategy for trading between false acceptances and false rejections in the standard model. The paper does not involve any implementation or application of specific AI techniques such as neural networks or reinforcement learning.
Case Based, Theory  Explanation:   1. Case Based: The paper describes an algorithm that uses past errors to create partitions in the domain being approximated. This is similar to how case-based reasoning systems use past cases to make decisions. However, the paper does not explicitly mention case-based reasoning or any specific case-based reasoning techniques.  2. Theory: The paper proposes a new algorithm that uses the error distribution generated by a learning algorithm to create piecewise learnable partitions. The paper discusses the importance of the error distribution and how it can be used to improve the learning process. The paper also describes the algorithm in detail and provides experimental results. This is an example of theoretical research in AI.
Neural Networks.   Explanation: The paper discusses a distributed neurosimulator designed for neural networks, and the challenges of running large neural networks on current execution platforms. The paper also mentions that the design of PREENS allows for neural networks to be run on high performance MIMD machines. Therefore, the paper is primarily focused on neural networks and their simulation.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper proposes new genetic operators, such as Mutespec, to improve the performance of the learning classifier system. The paper also discusses the interplay between the reward system and the background genetic algorithm.  Reinforcement Learning: The paper presents simulation results of an agent learning to follow a light source in a two-dimensional world, which is an example of reinforcement learning. The paper also discusses the difficulty of regulating the reward system in the learning classifier system.
Reinforcement Learning, Rule Learning  Explanation:  The paper describes a classifier system that uses reinforcement learning to play a simple board game. Reinforcement learning is a sub-category of AI that involves an agent learning to make decisions based on feedback from its environment. In this case, the agent (the classifier system) learns to make moves in the board game based on the feedback it receives from the game board.  The paper also mentions rule learning, which is another sub-category of AI that involves learning rules or patterns from data. The classifier system in the paper uses a set of rules to make decisions about which moves to make in the game. These rules are learned through a process of trial and error, as the system receives feedback on the success or failure of its moves.
Neural Networks.   Explanation: The paper discusses a method for training neural networks by minimizing the amount of information in the weights, which is a common technique in the field of neural networks. The paper also describes how to compute the derivatives of the expected squared error and the amount of information in the weights in a network with non-linear hidden units, which is a key aspect of training neural networks.
Reinforcement Learning, Theory.   Reinforcement learning is present in the paper as the authors propose an on-line learning algorithm based on the "Hedge" algorithm for finding a good linear combination of ranking "experts." This algorithm is a form of reinforcement learning as it involves learning from feedback in the form of preference judgments.   Theory is also present in the paper as the authors discuss the problem of finding the ordering that agrees best with a preference function, which they show is NP-complete even under very restrictive assumptions. They then propose a simple greedy algorithm that is guaranteed to find a good approximation. The paper also discusses the theoretical basis for their approach and the mathematical framework for their algorithms.
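The Hedge-style multiplicative-weights update referred to above can be sketched as follows; the per-round loss values and the choice of beta are illustrative, and this is the generic Hedge update rather than the paper's full ranking procedure:

```python
def hedge(losses, beta=0.5):
    """Hedge: maintain one weight per expert, multiply by beta**loss each round.

    losses: list of rounds; each round is a list of per-expert losses in [0, 1].
    Returns the final normalized weight vector (the learned combination).
    """
    n = len(losses[0])
    w = [1.0] * n
    for round_losses in losses:
        w = [wi * (beta ** li) for wi, li in zip(w, round_losses)]
    total = sum(w)
    return [wi / total for wi in w]

# Expert 0 suffers loss once; expert 1 suffers loss in three of four rounds,
# so the final weights concentrate on expert 0.
weights = hedge([[1, 0], [0, 1], [0, 1], [0, 1]], beta=0.5)
```

In the ranking setting, each "expert" is a ranking feature and the loss in each round reflects disagreement with the observed preference judgments.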
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses Dynamic Conditional Independence Models (DCIMs), which are probabilistic graphical models used to represent conditional independence relationships between variables over time. The authors also use Markov Chain Monte Carlo (MCMC) methods to estimate the parameters of the DCIMs.  Theory: The paper presents theoretical results on the identifiability of DCIMs and the convergence properties of MCMC methods for estimating their parameters. The authors also provide a detailed description of the DCIM framework and its application to real-world data.
Genetic Algorithms, Theory.   The paper primarily discusses the relation between evolutionary techniques and search methods, specifically numerical and classical search methods. This falls under the category of Genetic Algorithms, which are a type of evolutionary algorithm. The paper also presents a more general search strategy, which can be considered a theoretical contribution to the field.
Neural Networks. The paper presents a neural net architecture that can discover hierarchical and recursive structure in symbol strings. The architecture has only one layer of modifiable weights, allowing for a straightforward interpretation of its behavior. The paper also discusses the difficulty of extracting multilevel structure from complex, extended sequences, which is a problem that has been studied by other researchers in the field of neural networks.
Neural Networks. This paper belongs to the sub-category of Neural Networks. The paper discusses the implementation of self-organizing feature maps using biologically plausible methods, which involve neural processes such as lateral inhibition and synaptic resource redistribution. The paper also discusses the use of similarity measures and weight adaptation, which are key components of neural network models.
Probabilistic Methods.   Explanation: The paper discusses the use of graphical models, which are a type of probabilistic model, in applied mathematical multivariate statistics. The author discusses the use of Bayesian networks and Markov random fields as examples of graphical models. The paper does not mention any other sub-categories of AI.
Reinforcement Learning, Imitation Learning.   Reinforcement learning is the main focus of the paper, as the authors describe an algorithm that integrates imitation with Q-learning. The IQ-algorithm is a form of reinforcement learning that allows agents to learn how to act by observing or imitating other agents.   Imitation learning is also a key aspect of the paper, as the IQ-algorithm uses observations of an expert agent to bias exploration in promising directions. The algorithm allows for transfer between agents with different objectives and abilities, which is a form of imitation learning.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents a hybrid neural network solution for face recognition, which combines a self-organizing map neural network and a convolutional neural network. The authors explain how the self-organizing map provides dimensionality reduction and invariance to minor changes in the image sample, while the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. They also discuss how the convolutional network extracts successively larger features in a hierarchical set of layers.  Probabilistic Methods: The authors mention that the recognizer provides a measure of confidence in its output, and that classification error approaches zero when rejecting as few as 10% of the examples. They also analyze computational complexity and discuss how new classes could be added to the trained recognizer. These aspects suggest the use of probabilistic methods in the approach.
Reinforcement Learning.   Explanation: The paper presents a new algorithm for solving Markov decision problems using reinforcement learning techniques. It extends the modified policy iteration algorithm and introduces asynchronous updates and a modified policy evaluation operator. The paper discusses the convergence properties of the algorithm and its ability to handle more general initial conditions. Therefore, the paper belongs to the sub-category of Reinforcement Learning in AI.
Theory.   Explanation: This paper is a philosophical analysis of the concepts of causation, action, and counterfactuals, and does not apply any specific AI techniques. Of the sub-categories listed, it fits only Theory, serving as a theoretical foundation for AI research on causality and decision-making.
Probabilistic Methods.   Explanation: The paper discusses the use of Markov chain Monte Carlo (MCMC) sampling methods for determining properties of a posterior distribution, which is a probabilistic method commonly used in Bayesian computation. The paper also proposes an importance weighted marginal density estimation (IWMDE) method, which is another probabilistic method for estimating marginal posterior densities. The paper does not discuss any other sub-categories of AI.
Theory. The paper focuses on characterizing the complexity of noise-tolerant learning in the PAC model, which is a theoretical framework for machine learning. The paper presents lower bounds and an algorithm for this problem, which are theoretical results. There is no mention of any specific AI techniques such as neural networks or reinforcement learning.
Probabilistic Methods.   Explanation: The paper compares different non-hierarchical unsupervised classifiers using Monte Carlo simulations, which involve generating random samples based on probability distributions. The classifiers being compared are all probabilistic models, including mixture models, latent class models, and factor analysis models. The paper also discusses the use of Bayesian model selection criteria to evaluate the performance of the classifiers. Therefore, the paper is primarily focused on probabilistic methods for unsupervised classification.
Genetic Algorithms.   Explanation: The paper describes simulations that study the effect of modifier genes on suppressing the short-sighted development of virulence. This involves the use of genetic algorithms, which are a type of optimization algorithm inspired by the process of natural selection. The simulations involve the evolution of mutation rates, which is a key aspect of genetic algorithms.
Genetic Algorithms, Reinforcement Learning  The paper belongs to the sub-categories of Genetic Algorithms and Reinforcement Learning.   Genetic Algorithms are present in the paper as the authors propose an evolving visual routines architecture that uses a genetic algorithm to optimize the visual routines. The genetic algorithm is used to evolve the parameters of the visual routines, such as the number of layers and the number of neurons in each layer, to improve the performance of the system.  Reinforcement Learning is also present in the paper as the authors use a reinforcement learning approach to train the visual routines. The system is trained using a reward signal that is based on the accuracy of the system's predictions. The authors use a Q-learning algorithm to update the weights of the neural network based on the reward signal.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper describes a method for computing a bounding envelope of a multivariate monotonic function using a special neural network that is guaranteed to produce only monotonic functions.   Probabilistic Methods: The derived envelope is computed by determining a simultaneous confidence band for the neural network, which involves probabilistic methods.
Case Based, Probabilistic Methods  Explanation:  - Case Based: The paper describes the use of Memory-Based Learning, which is a type of Case-Based reasoning where examples are stored in memory and used for generalization.  - Probabilistic Methods: The paper compares Memory-Based Learning with several statistical methods that are well-suited to large numbers of features, which are typically probabilistic in nature. The evaluation of the methods is also based on probabilistic metrics such as precision and recall.
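The memory-based learning scheme described above can be sketched as a nearest-neighbor classifier: all examples are stored verbatim, and a query is generalized to by majority vote among its nearest stored neighbors. The data points and labels below are hypothetical:

```python
from collections import Counter

def knn_classify(memory, query, k=3):
    """Memory-based learning: store all examples, classify a query by the
    majority label among its k nearest stored neighbors (squared Euclidean
    distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    neighbors = sorted(memory, key=lambda ex: dist(ex[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D feature vectors stored verbatim in memory.
memory = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((1.0, 1.0), "b"),
          ((0.9, 1.1), "b"), ((0.2, 0.1), "a")]
label = knn_classify(memory, (0.05, 0.05), k=3)
```

Because generalization happens only at query time, no abstraction is induced from the examples, which is the defining contrast with the statistical methods the paper compares against.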
Probabilistic Methods.   Explanation: The paper discusses the use of hierarchical Bayesian mixture models for data analysis and inference on neural synaptic transmission characteristics. Bayesian methods are a type of probabilistic method that involve the use of prior distributions to model uncertainty about parameters, and posterior distributions to update these distributions based on observed data. The paper also mentions the use of stochastic simulation for posterior analysis, which is a common technique in Bayesian inference.
Probabilistic Methods, Memory-Based Learning.   Probabilistic Methods: The paper mentions that machine learning techniques, including memory-based learning, offer the tools to meet the need for efficient and accurate NLP modules. Many of these techniques are probabilistic, learning from data and making predictions based on estimated probabilities.   Memory-Based Learning: The paper focuses on the use of memory-based learning (MBL) for developing NLP modules. The examples presented in the paper are all trained using MBL, and the authors argue that MBL is applicable to a large class of other NLP tasks. MBL is a type of machine learning that involves storing and retrieving instances from memory to make predictions.
Theory. The paper presents theoretical results and algorithms for identifying read-once boolean formulas over generalized bases, without any practical implementation or application. The paper does not involve any learning from data or experience, which are the main focus of other sub-categories of AI such as Neural Networks, Reinforcement Learning, and Probabilistic Methods.
Probabilistic Methods.   Explanation: The paper discusses a time series model that involves Markov chains and utilizes variational approximations to deal with intractability. These are all characteristics of probabilistic methods in AI.
Probabilistic Methods.   Explanation: The paper discusses stochastic simulation algorithms for dynamic probabilistic networks, which are used to represent stochastic temporal processes. The algorithms presented in the paper are based on probabilistic methods such as likelihood weighting and use evidence observed at each time step to improve the accuracy of the simulations. There is no mention of case-based reasoning, genetic algorithms, neural networks, reinforcement learning, rule learning, or theory in the paper.
Probabilistic Methods.   Explanation: The paper describes the use of a probabilistic method, specifically the Simulated Annealing Search technique, to find the best solution by minimising the energy. The probability of accepting changes is given by a formula that includes the change in energy, a constant, and the temperature. The temperature is progressively reduced using a cooling schedule, allowing smaller changes until the system solidifies at a low energy solution. There is no mention of any other sub-categories of AI in the text.
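The acceptance rule described above is the standard Metropolis criterion with a geometric cooling schedule. As a minimal sketch (not the paper's implementation; the function names, starting temperature, and cooling factor are illustrative assumptions):

```python
import math
import random

def simulated_annealing(energy, neighbor, state, t_start=10.0, t_end=1e-3, alpha=0.95):
    """Minimise `energy` by accepting candidate moves with the Metropolis criterion."""
    t = t_start
    best = state
    while t > t_end:
        candidate = neighbor(state)
        delta = energy(candidate) - energy(state)
        # Always accept improvements; accept worse moves with probability exp(-delta/t),
        # so large uphill moves become rare as the temperature t falls.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            state = candidate
        if energy(state) < energy(best):
            best = state
        t *= alpha  # geometric cooling schedule
    return best
```

For example, minimising `(x - 3)**2` with small random perturbations as the neighbour function settles near `x = 3` once the temperature is low.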
Rule Learning, Case Based.   Rule Learning is the most related sub-category as the paper investigates the error-proneness of small disjuncts in inductive learning, which is a type of rule learning. The paper also discusses the impact of attribute noise, missing attributes, class noise, and training set size on rule learning.   Case Based is the second most related sub-category as the paper discusses the impact of rare cases within a domain on inductive learning. This is similar to case-based reasoning, where past cases are used to solve new problems.
Theory.   Explanation: The paper discusses the theoretical analysis of learning algorithms that use membership and equivalence queries to identify unknown functions from various classes. It does not involve the implementation or application of any specific AI technique such as neural networks, genetic algorithms, or reinforcement learning.
Probabilistic Methods, Theory  The paper belongs to the sub-category of Probabilistic Methods because it uses probabilistic models to learn unions of rectangles. The authors use a Bayesian approach to model the probability of a point belonging to a rectangle and then use this model to learn the union of rectangles that best fits the data.  The paper also belongs to the sub-category of Theory because it provides a theoretical analysis of the problem of learning unions of rectangles. The authors prove that the problem is NP-hard and provide an algorithm that approximates the optimal solution. They also provide bounds on the approximation ratio of their algorithm.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms: The paper discusses the use of evolution strategies, which are a type of genetic algorithm. It explains how these strategies involve generating a population of candidate solutions and then selecting the best ones to reproduce and create the next generation.   Probabilistic Methods: The paper also discusses the use of probability distributions in evolution strategies. It explains how these distributions are used to generate new candidate solutions and how they can be adapted over time to improve the search process.
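The generate-then-select loop described above can be sketched as a minimal (mu, lambda) evolution strategy with Gaussian mutation. This is an illustrative sketch under assumed parameter values, not the paper's algorithm:

```python
import random

def evolution_strategy(fitness, dim, mu=5, lam=20, sigma=0.3, generations=100):
    """A minimal (mu, lambda)-ES: Gaussian mutation plus truncation selection."""
    parents = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            # Each offspring is a parent perturbed by a Gaussian probability distribution.
            parent = random.choice(parents)
            offspring.append([x + random.gauss(0, sigma) for x in parent])
        # Truncation selection: the mu best offspring become the next parents.
        offspring.sort(key=fitness)
        parents = offspring[:mu]
    return parents[0]
```

On a simple sphere function (sum of squared coordinates), the population concentrates near the origin after a modest number of generations.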
Rule Learning, Theory.   Explanation: The paper presents a learning algorithm for rule-based concept representations, specifically ripple-down rule sets. The focus is on developing a theoretical framework for representing concepts with local exceptions, and the algorithm is based on a greedy approximation method for the weighted set cover problem. There is no mention of case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Rule Learning, Theory.   Explanation: The paper presents an algorithm for learning sets of rules organized into hierarchical levels, where each level contains a set of rules of the form "if c then l". The algorithm uses a greedy heuristic for weighted set cover to learn the rules, and the paper provides a theoretical analysis of the algorithm's performance. Therefore, the paper belongs to the sub-category of Rule Learning. Additionally, the paper presents a theoretical analysis of the algorithm's performance, which falls under the sub-category of Theory.
This paper does not belong to any of the sub-categories of AI listed. It is a remote sensing study that uses Synthetic Aperture Radar (SAR) imagery to characterize carbon dynamics in a northern forest. While remote sensing technology may involve some AI techniques, such as image classification using neural networks, this paper does not explicitly discuss or apply any AI methods.
Neural Networks.   Explanation: The paper investigates a neural network model of the mapping from orthography to semantics, and discusses issues related to the representation and methodology of this model. The paper also reports findings related to the behavior of the network, including the effect of semantic neighborhood density on response times, and the impact of changing the stopping criterion on the results. Therefore, the paper is primarily focused on neural network methods and their application to natural language processing.
Probabilistic Methods, Theory  Probabilistic Methods: The paper discusses the use of probability theory in predicting the behavior of composite geometric concepts. It mentions the use of Bayesian networks and Markov models to model the relationships between different geometric concepts.  Theory: The paper presents a theoretical framework for understanding the predictability of composite geometric concepts. It discusses the use of polynomial regression to model the relationships between different geometric concepts and the use of principal component analysis to reduce the dimensionality of the data. The paper also discusses the theoretical implications of the results, such as the importance of considering the interactions between different geometric concepts.
Case Based.   Explanation: The paper focuses on the use of Case-Based Reasoning (CBR) as a form of "caching" solved problems to speed up later problem solving. The paper discusses the utility problem associated with caching cases and the construction of a cost model to predict the effect of changes to the case memory. These are all related to the use of CBR. There is no mention of any other sub-category of AI in the text.
Genetic Algorithms.   Explanation: The paper explicitly mentions the use of a Genetic Algorithmic approach to vector quantizer design, which is referred to as the Genetic Generalized Lloyd Algorithm (GGLA). The paper also discusses experiments with various alternative design choices using this approach. There is no mention of any other sub-category of AI in the text.
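The GGLA builds on the classical Generalized Lloyd (LBG) iteration for codebook design. A minimal scalar-case sketch of that underlying iteration (deterministic seeding and helper names are illustrative, not taken from the paper):

```python
def lloyd_vq(data, k, iters=20):
    """Generalized Lloyd iteration (scalar case) for vector-quantizer codebook design."""
    data = sorted(data)
    # Spread the initial codewords across the sorted training data.
    codebook = [data[i * len(data) // k] for i in range(k)]
    for _ in range(iters):
        # Nearest-neighbour condition: partition the training data among codewords.
        cells = [[] for _ in range(k)]
        for x in data:
            i = min(range(k), key=lambda j: (x - codebook[j]) ** 2)
            cells[i].append(x)
        # Centroid condition: each codeword moves to the mean of its cell.
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return sorted(codebook)
```

On two well-separated clusters such as `[0, 1, 2]` and `[10, 11, 12]` with `k=2`, the codebook converges to the cluster means `[1.0, 11.0]`.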
Case Based.   Explanation: The paper is specifically about case-based planning (CBP) and the development of a CBP system called CaPER. The paper discusses the advantages of CBP over generative planning and the challenges of implementing CBP systems, but the focus is on the use of previously generated plans (cases) to solve similar planning problems in the future. The paper does not discuss genetic algorithms, neural networks, probabilistic methods, reinforcement learning, rule learning, or theory.
Probabilistic Methods.   Explanation: The paper focuses on the nonparametric maximum likelihood estimator for double censoring, which is a probabilistic method used in survival analysis. The authors discuss the likelihood function and its properties, as well as the computation of the estimator using the EM algorithm. The paper also includes simulation studies and real data examples to demonstrate the performance of the estimator. There is no mention of other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper presents computational results from a high level parallel Genetic Algorithm that utilizes the method of stripe decomposition for assigning processors to tasks associated with the cells of a rectangular uniform grid.   Theory: The paper presents a theoretical analysis of the method of stripe decomposition and proves that under some mild assumptions, as the problem size grows large in all parameters, the error bound associated with this feasible solution approaches zero.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses a statistical model of uncertainty in the world, which is a key component of probabilistic methods.  Reinforcement Learning: The paper discusses the use of exploration bonuses in reinforcement learning and how to compute suboptimal estimates based on a certainty equivalence approximation arising from a form of dual control. The paper also mentions Sutton's work on exploration bonuses in reinforcement learning.
Neural Networks, Theory.   Neural Networks: The paper specifically discusses the fitting of data using single-hidden layer neural networks with sigmoidal activation functions. It provides an upper bound for the number of critical points for this type of network.  Theory: The paper presents a general result for the countability and finiteness of the set of functions giving rise to critical points of the quadratic loss function for generic regression data. It also provides a rough upper bound for the cardinality of critical points for single-hidden layer neural networks. These results are theoretical in nature.
Theory.   Explanation: The paper discusses two theoretical sources, constructive modeling and adaptive modeling, and how they are used to develop a new computational model, ToRQUE. The paper does not discuss any specific AI sub-category such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Neural Networks.   Explanation: The paper discusses the design of neural networks for adaptive control and investigates techniques for their solution within the framework of neurocontrol. The systematic design method developed in the paper is exemplified for the development of an adaptive force controller for a robot manipulator using neural networks. While other sub-categories of AI may also be relevant to adaptive control, the focus of this paper is on neural networks.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents a learning algorithm that uses sigmoid nodes as binary discriminators to cluster unlabelled data with linear discriminants. The weight adaptation rule is derived via gradient ascent in the objective of maximizing information gained from observing the output of these discriminators.  Probabilistic Methods: The paper uses information theory to derive an objective function for clustering unlabelled data. The objective is to maximize the information gained from observing the output of binary discriminators, which are implemented with sigmoid nodes. The paper also discusses the dynamics of the weight adaptation rule and relates the approach to previous work in the field.
Rule Learning.   Explanation: The paper presents an ASOCS model for massively parallel processing of incrementally defined rule systems in areas such as adaptive logic, robotics, logical inference, and dynamic control. The focus is on Adaptive Algorithm 2 (AA2), whose architecture and learning algorithm have significant memory and knowledge-maintenance advantages over previous ASOCS models. The ASOCS operates in either a data-processing mode or a learning mode: during learning, the ASOCS is given a new rule expressed as a Boolean conjunction, and the AA2 learning algorithm incorporates it in a distributed fashion in a short, bounded time. Therefore, the paper belongs to the sub-category of Rule Learning in AI.
Probabilistic Methods.   Explanation: The paper presents a method for analyzing time series using Markov models, which are probabilistic models. The specific type of Markov model used is a mixed memory Markov model, which is a probabilistic method for modeling time series data.
Genetic Algorithms, Rule Learning.   Genetic algorithms are the primary method used in this paper for learning rule-based strategies used by autonomous robots. The paper describes an implementation of a parallel genetic algorithm for this purpose.   Rule learning is also relevant as the focus is on learning rule-based strategies for the robots. The paper discusses the use of genetic algorithms to learn these rules and the evaluation of these rules through simulations.
Neural Networks. This paper introduces a strategy for implementing dynamic neural networks efficiently in parallel hardware, specifically using Backpropagation with two layers of weights. The paper discusses the use of location-independent nodes and dynamic topologies, which are key features of neural networks.
Genetic Algorithms.   Explanation: The paper discusses the application of genetic algorithms to solve a combinatorial optimization problem, specifically the n-job m-machine flowshop problem. The authors modify the canonical coding of the symmetric TSP to create a coding scheme for this problem, and show that genetic operators act intelligently on this coding scheme. They also implement an asynchronous parallel genetic algorithm on a computer network and discuss computational results. Therefore, the paper primarily belongs to the sub-category of Genetic Algorithms in AI.
Neural Networks.   Explanation: The paper presents a VLSI implementation of the Priority Adaptive Self-Organizing Concurrent System (PASOCS) learning model, which is a type of connectionist model that falls under the category of neural networks. The paper discusses how PASOCS differs from classical neural network structures and its potential applications in areas such as pattern recognition, robotics, logical inference, and dynamic control.
Genetic Algorithms.   Explanation: The paper outlines the application of a genetic algorithm to the dynamic job shop problem arising in production scheduling. The paper describes a genetic algorithm that can handle release times of jobs and uses a preceding simulation method to improve the performance of the algorithm. The job shop is regarded as a nondeterministic optimization problem arising from the occurrence of job releases, and Temporal Decomposition leads to a scheduling control that interweaves both simulation in time and genetic search. Therefore, the paper primarily belongs to the sub-category of Genetic Algorithms.
Neural Networks.   Explanation: The paper focuses on neural network pruning methods and their comparison with pure early stopping. The study presents a new pruning method that adapts the pruning strength during training based on the evolution of the weights and the loss of generalization. The paper experiments extensively with 14 different problems to compare the performance of the pruning methods and early stopping. Therefore, the paper belongs to the Neural Networks sub-category of AI.
Case Based, Reinforcement Learning.   Case-based problem-solving systems are the focus of the paper, which falls under the category of Case Based AI. The paper also discusses a case-based planning system that learns new adaptations, which involves Reinforcement Learning.
Case Based, Reinforcement Learning  Explanation:  - Case Based: The paper focuses on case-based reasoning and the learning mechanisms involved in this process. - Reinforcement Learning: The paper discusses the costs and benefits of different learning processes for different knowledge sources, which is a key aspect of reinforcement learning. The paper also proposes a method for integrating multiple learning methods, which is a common approach in reinforcement learning.
Case Based.   Explanation: The paper focuses on case-based reasoning (CBR) and specifically investigates the use of case-based components within a CBR system. The paper discusses the design considerations and empirical results of a case-based planning system that uses CBR to guide its case adaptation and similarity assessment. The paper also explores the potential of learning to acquire the knowledge needed for CBR systems, which is a key characteristic of case-based AI. The other sub-categories of AI (Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, Rule Learning, Theory) are not directly related to the content of the paper.
This paper belongs to the sub-category of Neural Networks.   Explanation: The paper discusses the use of nested networks for robot control, which involves the use of artificial neural networks to control the behavior of the robot. The authors describe how the nested networks are trained using backpropagation, which is a common technique used in neural network training. Additionally, the paper discusses the use of recurrent neural networks for temporal processing, which is another example of how neural networks are used in this context. Overall, the paper focuses on the use of neural networks for robot control, making it most closely related to this sub-category of AI.
Theory.   Explanation: The paper presents a theoretical approach to solving the problem of discriminating between two massive sets of data using linear support vector machines. The authors develop a linear programming algorithm that creates a succession of small linear programs to separate chunks of the data at a time, and prove that this procedure is monotonic and terminates in a finite number of steps at an exact solution that leads to a globally optimal separating plane for the entire dataset. The paper does not involve any of the other sub-categories of AI listed.
Neural Networks.   Explanation: The paper describes a machine learning approach to text-to-speech that builds upon and extends the initial NETtalk work. The extensions include a different learning algorithm, a wider input "window", error-correcting output coding, a right-to-left scan of the word to be pronounced (with the results of each decision influencing subsequent decisions), and the addition of several useful input features. These changes yielded a system that performs much better than the original NETtalk system. The paper does not mention any other sub-categories of AI.
Theory.   Explanation: This paper presents a theoretical approach to the problem of minimizing misclassified points by a plane in n-dimensional real space. It formulates the problem as a linear program with equilibrium constraints (LPEC) and proposes a Frank-Wolfe-type algorithm for solving the associated penalty problem. The paper does not involve any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Case Based, Rule Learning  Explanation:   This paper belongs to the sub-category of Case Based AI because it discusses the storage, retrieval, and replay of planning cases to improve planning performance. It also introduces merge strategies for replaying multiple planning cases.   It also belongs to the sub-category of Rule Learning because it discusses the adaptation and merging of annotated derivations of planning cases, which involves processing the differences between past and new situations and annotated justifications. This process involves learning rules for merging the cases.
Case Based, Rule Learning  Explanation:   - Case Based: The paper discusses the reuse of past plans and their justification structure to inform new planning decisions. This is similar to the idea of case-based reasoning, where past cases are used to inform new problem-solving.  - Rule Learning: The paper discusses the development of algorithms to capture and reuse the rationale of an automated planner during its plan generation. This involves learning rules or patterns from past planning decisions to inform future ones.
Neural Networks.   Explanation: The paper discusses the use of neural networks and specifically focuses on improving their generalization by combining the predictions of multiple separately trained networks. The paper also proposes a method for initializing neural networks using competitive learning. While other sub-categories of AI may be indirectly related to the topic, neural networks are the primary focus and the most relevant sub-category.
Probabilistic Methods.   Explanation: The paper presents two algorithms for inducing structural equation models from data, which assume no latent variables and have a causal interpretation. The parameters of these models may be estimated by linear multiple regression. These models are comparable with PC and IC, which rely on conditional independence. The use of probabilistic methods is evident in the estimation of the parameters of the models through linear multiple regression.
Neural Networks, Theory.   Neural Networks: The paper discusses the refinement of knowledge-based neural networks, which are a type of artificial neural network. The authors propose an anytime approach to refining the topology of these networks, which involves adding or removing neurons and connections to improve their performance.   Theory: The paper presents a theoretical framework for refining neural network topologies, based on the idea of anytime algorithms. The authors argue that this approach can be used to improve the efficiency and effectiveness of knowledge-based neural networks, and they provide experimental results to support their claims. The paper also discusses the broader implications of their work for the field of connectionist theory refinement.
Neural Networks.   Explanation: The paper discusses neural network based approximation methods and proposes a multi-resolution hierarchical method using self-organising maps (SOMs) to find an optimal partitioning of the input space. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Neural Networks.   Explanation: The paper describes a new approach for incremental training of a feedforward network with a single hidden layer. The approach is based on the use of orthogonal basis functions to describe the output weights and treating the hidden nodes as the orthogonal representation of the network in the output weights domain. The paper also discusses a special orthogonal backpropagation (OBP) rule for training the hidden nodes. All of these concepts are related to neural networks, making it the most relevant sub-category of AI for this paper.
Case Based, Rule Learning, Theory.   Case Based: The paper discusses the problem of choosing the most appropriate machine learning tool for a particular task, which is a problem that can be addressed using case-based reasoning.   Rule Learning: The paper discusses the discovery of rules that match applications to models based on various criteria, including predictive accuracy.   Theory: The paper presents a number of criteria beyond predictive accuracy that could be considered when learning about model selection, which is a theoretical discussion.
Neural Networks. This paper belongs to the sub-category of Neural Networks. The paper discusses the use of a recurrent neural network as an associative memory for invariant object recognition. It introduces the concept of object representation by continuous attractors and learning attractors by pattern completion. The paper also discusses the limitations of naive methods for learning attractors and proposes a superior method based on pattern completion. Overall, the paper focuses on the use of neural networks for learning continuous attractors.
Genetic Algorithms.   Explanation: The paper presents an experimental investigation on solving graph coloring problems with Evolutionary Algorithms (EA), specifically an asexual EA using order-based representation and an adaptation mechanism that periodically changes the fitness function during the evolution. This is a clear indication of the use of Genetic Algorithms, which is a sub-category of AI that uses evolutionary principles to solve optimization problems. The paper compares the performance of the adaptive EA to a traditional graph coloring technique DSatur and the Grouping GA, further emphasizing the use of evolutionary algorithms in the study.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper introduces simple neuron models for independent component analysis. It discusses the use of a two-unit system and a system of several neurons with linear negative feedback to estimate independent components.  Probabilistic Methods: The paper discusses the estimation of independent components from sphered data and non-sphered (raw) mixtures. It also mentions the estimation of independent components with positive and negative kurtosis, which is a probabilistic measure of the shape of a distribution. The convergence of the learning rules is proven without any unnecessary hypotheses on the distributions of the independent components.
Probabilistic Methods, Case Based.   Probabilistic Methods: The paper describes the use of evidence grids, which are a probabilistic description of occupancy, to represent distinct places.   Case Based: The paper describes the learning mechanism as being similar to that in case-based systems, involving the simple storage of inferred evidence grids. Place recognition relies on case-based classification, augmented by a registration process to correct for translations.
Theory  Explanation: The paper proposes a theoretical model for constructive induction and discusses the distinction between constructive and non-constructive methods. While the paper mentions supervised learning, it also argues that constructive induction can be used in an unsupervised regime, but it does not focus on any specific sub-category of AI such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Rule Learning, Theory.   The paper presents an intelligent system that employs techniques from the area of inductive logic programming to assist in the design of a deductive database. Inductive logic programming is a subfield of machine learning that focuses on learning rules from examples. Therefore, the paper belongs to the subcategory of Rule Learning.   Additionally, the paper discusses the theoretical aspects of designing a deductive database, such as deciding whether a predicate should be defined extensionally or intensionally. Therefore, it also belongs to the subcategory of Theory.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses probabilistic conditioning and thresholding as one of the existing formalisms that can be incorporated into the proposed framework.   Theory: The paper presents a unifying framework for uncertain reasoning, which establishes a common basis for characterizing and evaluating different formalisms. The framework is based on an ordered partition of possible worlds called partition sequences, which is a theoretical concept.
Neural Networks, Reinforcement Learning  This paper belongs to the sub-category of Neural Networks as it proposes a method for signal separation using a nonlinear Hebbian learning algorithm, which is a type of artificial neural network. The paper also discusses the use of backpropagation, which is a common neural network training algorithm.  Additionally, the paper also belongs to the sub-category of Reinforcement Learning as it discusses the use of a reward signal to guide the learning process. The authors propose a method for using a reward signal to improve the separation of signals in the presence of noise.
Neural Networks.   Explanation: The paper investigates a technique for creating sparsely connected feed-forward neural networks and presents initial results based on tests on a specific problem. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Probabilistic Methods.   Explanation: The paper proposes a method for calculating the posterior probability of a nondecomposable graphical Gaussian model using Bayesian inference, which is a probabilistic method. The paper also mentions sampling from Wishart distributions, which is a common probabilistic method used in Bayesian inference.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as it discusses the performance of different temporal difference (TD) learning methods in a Markov chain with reinforcement. The paper also presents theoretical analysis of the performance of TD(0) and TD(1) in terms of their approximation errors for the value function.
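The TD(0) update analysed in the paper moves each value estimate toward a bootstrapped target. A minimal sketch on a small Markov chain (the data structures and parameter values are illustrative assumptions, not the paper's experimental setup):

```python
import random

def td0(transitions, rewards, episodes=2000, alpha=0.1, gamma=0.9):
    """TD(0) value estimation for a small Markov chain.

    transitions[s] lists the possible next states (chosen uniformly);
    terminal states map to an empty list.
    """
    V = {s: 0.0 for s in transitions}
    for _ in range(episodes):
        s = 0  # start state
        while transitions[s]:
            s_next = random.choice(transitions[s])
            r = rewards.get((s, s_next), 0.0)
            # TD(0) update: move V(s) toward the bootstrapped target r + gamma*V(s').
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next
    return V
```

For the deterministic chain 0 -> 1 -> 2 with a reward of 1 on entering the terminal state 2, the estimates converge to V(1) = 1 and V(0) = gamma * V(1) = 0.9.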
Reinforcement Learning, Case Based  Reinforcement Learning is the most related sub-category of AI in this paper. The paper discusses how lazy learning methods, specifically locally weighted learning, can be used for autonomous adaptive control of complex systems. This is a form of reinforcement learning, where the system learns from its experiences and adjusts its behavior accordingly.  Case Based is another sub-category of AI that applies to this paper. The paper discusses how the system can remember all previous experiences and use them to inform future decisions. This is a form of case-based reasoning, where the system uses past cases to solve new problems.
Genetic Algorithms.   Explanation: The paper discusses the use of genetic programming, which is a variant of genetic algorithms, to evolve functional relationships or computer programs represented as trees. The paper specifically focuses on improving the performance and readability of solutions in genetic programming. While the paper does not explicitly mention other sub-categories of AI, it is clear that genetic algorithms are the primary focus.
Probabilistic Methods.   Explanation: The paper discusses the use of mixture modeling, which is a probabilistic method, to explore structure-activity relationships in drug design. The authors build structured mixture models that mix linear regression models with respect to site-binding selection mechanisms, and they discuss problems and pitfalls in modeling and analysis. They also describe the use of hierarchical random effects components to capture heterogeneities in both the site binding mechanisms and the levels of effectiveness of compounds once bound.
Genetic Algorithms, Neural Networks.   Genetic Algorithms are present in the text as the paper discusses the use of evolutionary algorithms to evolve visually guided robots. The paper describes how a population of robots with different neural network controllers is evolved through a genetic algorithm to optimize their performance in a visually guided task.   Neural Networks are also present in the text as the paper describes how the robots' controllers are implemented as neural networks. The paper discusses how the neural networks are trained through the genetic algorithm to improve their performance in the visually guided task.
Theory  Explanation: The paper provides a theoretical analysis of the generalization error of cross validation, using measures of the difficulty of the problem and giving a rigorous bound on the error. There is no mention or application of any specific sub-category of AI such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Theory.   Explanation: This paper focuses on the theoretical comparison of three model selection methods and introduces a general class of model selection methods. It does not involve the implementation or application of any specific AI sub-category such as neural networks or reinforcement learning.
Rule Learning, Theory.   The paper discusses the generalization of clauses, which is a key operation in rule learning. It introduces a new form of implication, called T-implication, which is a theoretical concept. The paper also discusses the limitations of existing techniques for generalization under θ-subsumption, which is another aspect of rule learning.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper deals with the distribution functions of order statistics, a probabilistic concept, and derives recurrence relationships among these distribution functions.   Theory: The paper extends known theory on the distribution functions of order statistics and provides computationally practicable algorithms; the derivation of the recurrence relationships is a theoretical contribution.
Probabilistic Methods.   Explanation: The paper argues that Bayesian probability theory is a general method for machine learning, which is a probabilistic approach to learning. The paper demonstrates the capabilities of the theory in two typical types of machine learning: incremental concept learning and unsupervised data classification, both of which involve probabilistic reasoning. The title of the paper also explicitly mentions Bayesian probability theory, which is a probabilistic method.
Probabilistic Methods.   Explanation: The paper discusses the use of Bayesian models for non-linear autoregressions, which is a probabilistic approach to modeling time series data. The authors use Bayesian inference to estimate the parameters of the non-linear autoregressive models, and they also discuss the use of Markov Chain Monte Carlo (MCMC) methods for posterior inference. Overall, the paper focuses on probabilistic methods for modeling time series data, making it most closely related to the sub-category of Probabilistic Methods within AI.
Rule Learning, Theory.   Rule Learning is present in the text as the paper discusses the use of error-correcting output codes (ECOCs), which represent classes with a set of output bits, where each bit encodes a binary classification task corresponding to a unique partition of the classes. Algorithms that use ECOCs learn the function corresponding to each bit, and combine them to generate class predictions.   Theory is also present in the text as the paper discusses the theoretical benefits of using ECOCs for multiclass classification tasks, specifically in reducing both variance and bias errors when the errors made at the output bits are not correlated. The paper also presents a theoretical approach to decorrelating the output bit predictions of local learners through feature selection.
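The decoding step described above (combining the learned output bits into a class prediction) can be sketched as minimum-Hamming-distance decoding; the code matrix here is a hypothetical example, not one from the paper:

```python
def ecoc_predict(bit_predictions, code_matrix):
    """Decode ECOC output bits to a class index by minimum Hamming distance."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    distances = [hamming(bit_predictions, codeword) for codeword in code_matrix]
    return min(range(len(code_matrix)), key=distances.__getitem__)

# Hypothetical 4-class, 5-bit code matrix: row k is the codeword for class k.
codes = [(0, 0, 1, 1, 0), (0, 1, 0, 1, 1), (1, 0, 0, 0, 1), (1, 1, 1, 0, 0)]
ecoc_predict((0, 1, 0, 1, 0), codes)  # nearest codeword is class 1's
```

Because each bit is learned independently, a few bit errors can still be corrected as long as the codewords are far apart in Hamming distance.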
Genetic Algorithms, Collective Intelligence  Explanation:  - Genetic Algorithms: The paper discusses the use of genetic programming (GP) as a search engine for the collective adaptation method. GP is a type of genetic algorithm that uses natural selection and genetic recombination to evolve solutions to a problem. - Collective Intelligence: The paper describes the integration of distributed search with collective memory to form a collective adaptation search method. This approach leverages the collective intelligence of a group of agents to improve search performance. The paper also compares the performance of a collective memory search using a random search engine to a GP-based search engine, highlighting the importance of collective intelligence in complex search problems.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the use of hierarchical priors and mixture models in regression and density estimation. These are probabilistic methods that allow for uncertainty and variability in the data. The paper also discusses Bayesian inference, which is a probabilistic approach to statistical inference.  Theory: The paper presents theoretical concepts and models, such as hierarchical priors and mixture models, and discusses their properties and applications. The paper also provides mathematical derivations and proofs to support the theoretical concepts presented.
Genetic Algorithms.   Explanation: The paper is solely focused on explaining the concept and implementation of Genetic Algorithms, which is a sub-category of AI. The author provides a detailed tutorial on how to use Genetic Algorithms to solve optimization problems. The paper does not discuss any other sub-category of AI.
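As an illustration of the kind of optimization such a tutorial covers, here is a minimal generational genetic algorithm on the OneMax problem (maximize the number of 1-bits); all parameters and the fitness function are illustrative, not taken from the paper:

```python
import random

def onemax_ga(n_bits=20, pop_size=30, generations=60, p_mut=0.05, seed=1):
    """Minimal GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fitness = lambda ind: sum(ind)  # OneMax: count of 1-bits
    for _ in range(generations):
        def select():  # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

On this toy problem the population typically converges to (or near) the all-ones string within a few dozen generations.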
Probabilistic Methods, Case Based  Explanation:  - Probabilistic Methods: The paper proposes a novel approach to similarity assessment, which is a key component of many probabilistic methods used in information retrieval. - Case Based: The paper discusses the importance of context in similarity assessment, which is a key concept in case-based reasoning. The relevance measures defined at query time can be seen as a way to adapt the retrieval process to the specific context of each query.
Reinforcement Learning, Case Based.   Reinforcement learning is present in the paper as the learning module combines case-based reasoning and reinforcement learning to continuously tune the navigation system through experience. The reinforcement learning component refines the content of the cases based on the current experience.   Case-based reasoning is also present in the paper as the case-based reasoning component perceives and characterizes the system's environment, retrieves an appropriate case, and uses the recommendations of the case to tune the parameters of the reactive control system. The learning components perform on-line adaptation, resulting in improved performance as the reactive control system tunes itself to the environment, as well as on-line case learning, resulting in an improved library of cases that capture environmental regularities necessary to perform on-line adaptation.
Reinforcement Learning, Neural Networks  Explanation:  - Reinforcement Learning: The paper presents a model for on-line learning that learns by querying "hard" patterns while classifying "easy" ones. This model is related to query-based filtering methods, but takes into account that filtering through the data has a cost. The paper also introduces and analyzes a few simple policies for a simple problem (the 1-D high-low game). These characteristics are typical of reinforcement learning, where an agent learns to make decisions by interacting with an environment and receiving feedback in the form of rewards or penalties. - Neural Networks: The paper mentions using a backpropagation network and a nearest neighbor classifier for a real-world OCR task. Backpropagation is a common algorithm for training neural networks, which are a type of machine learning model inspired by the structure and function of the human brain. The paper also suggests using the Query-by-Committee algorithm as a good approximator of the model space for real-world domains. This algorithm is based on the idea of using an ensemble of neural networks to make predictions and measure their disagreement, which can be used as a measure of uncertainty or confidence.
Genetic Algorithms.   Explanation: The paper discusses the use of genetic programming, which is a subfield of genetic algorithms. The paper specifically explores the effects of using a memory-based program response technique and the impact of introns on the search performance of genetic programming.
Rule Learning, Theory.   Rule Learning is present in the text as the paper discusses the accuracy of decision trees produced by Quinlan's C4.5 algorithm and compares it to a simple classification rule.   Theory is present in the text as the paper discusses the implications of Holte's study and questions the future of top-down induction of decision trees. The paper also discusses the representativeness of the databases used by Holte and compares the optimal accuracies of multilevel and one-level decision trees.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The approach described in the paper involves using a set of examples to automatically produce a grapheme-to-phoneme conversion system for a language. This system uses the rules implicit in the training data to generate phonetic transcriptions. The use of training data and rules implies a probabilistic approach to the problem.  Rule Learning: The system described in the paper is based on learning the rules for grapheme-to-phoneme conversion from training data. The system takes as input the spelling of words and produces as output the phonetic transcription according to the learned rules. This approach is based on rule learning.
Probabilistic Methods.   Explanation: The paper discusses the use of kernel density estimation, which is a probabilistic method, to optimize entropy and estimate the density of a signal. The paper also discusses the use of a prior over the space of possible density functions, which is a common approach in probabilistic modeling. There is no mention of any other sub-category of AI in the text.
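A minimal sketch of Gaussian kernel density estimation and the plug-in entropy estimate it supports; the bandwidth and resubstitution estimator are illustrative assumptions, not the paper's specific algorithm:

```python
import math

def kde_pdf(x, samples, bandwidth):
    """Gaussian kernel density estimate of the density at point x."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)

def entropy_estimate(samples, bandwidth):
    """Plug-in (resubstitution) entropy estimate: -mean of log p_hat(x_i)."""
    return -sum(math.log(kde_pdf(s, samples, bandwidth))
                for s in samples) / len(samples)
```

As expected, widely spread samples yield a higher entropy estimate than a tight cluster under the same bandwidth.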
Probabilistic Methods, Theory.  Probabilistic Methods: The paper discusses the use of optimal sparse locations and scales for function approximation, which involves probabilistic methods such as Gaussian processes and Bayesian inference.  Theory: The paper presents a new general representation for a function as a linear combination of local correlation kernels and characterizes its relation to various concepts such as PCA, regularization, sparsity principles, and Support Vector Machines. This involves theoretical analysis and mathematical derivations.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper deals with a set of probabilistic experiments and their success probabilities.  Theory: The paper studies the complexity of "learning" an approximately optimal search strategy in the fully general model. It provides a bound on the number of trials required to find a good search strategy.
Neural Networks.   Explanation: The paper discusses the Self-Organizing Map (SOM), which is a type of neural network used for unsupervised learning. The paper presents a new analytical method to derive conditions for the emergence of structure in SOMs, which is particularly suited for the high-dimensional variant of SOMs. The paper also discusses a SOM-based model for the development of orientation maps. Therefore, the paper belongs to the sub-category of Neural Networks in AI.
This paper belongs to the sub-category of AI known as Neural Networks.   Explanation: The paper compares neural classifiers with statistical classifiers, and discusses the theory and practice of using neural networks for classification tasks. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Reinforcement Learning, Probabilistic Methods, Theory.  Reinforcement Learning is the primary sub-category of AI that this paper belongs to. The paper studies the process of multi-agent reinforcement learning in the context of load balancing in a distributed system. The authors define a precise framework to study adaptive load balancing, which is stochastic in nature and relies on purely local information available to individual agents. They investigate the interplay between basic adaptive behavior parameters and their effect on system efficiency, explore the properties of adaptive load balancing in heterogeneous populations, and address the issue of exploration vs. exploitation in that context.  Probabilistic Methods are also present in the paper, as the authors mention the stochastic nature of the load balancing process and the fact that agents have access to only local information.  Finally, the paper also falls under the sub-category of Theory, as it presents theoretical results on the properties of adaptive load balancing in a distributed system without central coordination or explicit communication.
Probabilistic Methods.   Explanation: The paper introduces a new algorithm called "bits-back coding" that makes stochastic source codes efficient. It uses the Boltzmann distribution to choose codewords based on their lengths, which is a probabilistic method. The paper also presents a binary Bayesian network model that assigns exponentially many codewords to each symbol, which is another example of probabilistic modeling.
Reinforcement Learning, Probabilistic Methods  Explanation:   Reinforcement Learning is present in the text as the learning program developed in the paper learns to identify and exploit the weaknesses of a particular opponent by repeatedly playing it over several games. This is a classic example of reinforcement learning where the agent learns from its own experience.  Probabilistic Methods are also present in the text as the paper proposes a scheme for learning opponent action probabilities and a utility maximization framework that exploits this learned opponent model. This involves using probability distributions to model the opponent's behavior and using these models to make decisions that maximize the expected utility.
Probabilistic Methods.   Explanation: The paper focuses on discovering causal structure from data using an unsupervised learning algorithm derived from the Expectation-Maximization (EM) framework, which is a probabilistic method commonly used in machine learning. The paper also proposes two alternative methods for computing the E-step, which are Gibbs sampling and mean-field approximation, both of which are probabilistic methods.
Neural Networks.   Explanation: The paper proposes a method for blind identification and source separation using multi-layer neural networks. The nonlinear transformation used in the method is also described as a distortion, which is a common term used in neural network literature. The paper also discusses the development of new on-line un-supervised adaptive learning rules for the neural network implementation. Therefore, this paper belongs to the sub-category of AI known as Neural Networks.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper discusses the performance of source separation algorithms, which typically involve probabilistic models for the sources and mixtures. The paper also mentions the i.i.d. case, which stands for independent and identically distributed, a common assumption in probabilistic modeling.  Theory: The paper presents a lower bound on the performance of source separation algorithms, which is a theoretical result independent of any specific algorithm. The paper also discusses the invariance property of some algorithms, which is a theoretical property that can be used to predict their performance.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper proposes a multi-layer neural network architecture with local on-line learning rules for blind separation of sources. The motivation for using a multi-layer network is to improve the performance and robustness of separation, while applying a very simple local learning rule, which is biologically plausible.   Probabilistic Methods: The paper uses a probabilistic approach for blind separation of highly correlated human faces from a mixture of them, with additive noise and under an unknown number of sources. The proposed neural network architecture enables the extraction of source signals sequentially one after the other, starting from the strongest signal and finishing with the weakest one.
Reinforcement Learning, Probabilistic Methods  Explanation:  The paper belongs to the sub-category of Reinforcement Learning as it focuses on learning to solve Markovian Decision Processes (MDPs) using reinforcement learning algorithms. The paper also belongs to the sub-category of Probabilistic Methods as MDPs involve probabilistic transitions between states and the paper discusses various probabilistic models for MDPs.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses online learning algorithms that predict by combining the predictions of several subordinate prediction algorithms, which can be seen as a probabilistic approach to prediction.   Theory: The paper presents a method for transforming algorithms that assume all experts are always awake to algorithms that do not require this assumption, and derives corresponding loss bounds. This can be seen as a theoretical contribution to the field of online learning algorithms.
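The combine-and-reweight scheme for experts that may be "asleep" on a given trial can be sketched as follows; the squared-error loss and learning rate are illustrative assumptions, not the paper's exact algorithm:

```python
import math

def sleeping_experts_predict(weights, awake, predictions):
    """Weighted average of the predictions of the currently awake experts."""
    total = sum(weights[i] for i in awake)
    return sum(weights[i] * predictions[i] for i in awake) / total

def sleeping_experts_update(weights, awake, predictions, outcome, eta=0.5):
    """Multiplicatively penalize only the awake experts for their loss."""
    for i in awake:
        loss = (predictions[i] - outcome) ** 2
        weights[i] *= math.exp(-eta * loss)
    return weights
```

The key point is that a sleeping expert's weight is left untouched, so it is not penalized for trials on which it made no prediction.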
Probabilistic Methods.   Explanation: The paper proposes the use of a naive Bayesian classifier within the ILP-R first order learner to take into account the probabilistic aspects of hypotheses when classifying unseen examples. The paper also discusses the use of a RELIEF based heuristic to detect strong dependencies within the literal space. These are both examples of probabilistic methods in AI.
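A minimal naive Bayesian classifier of the kind referred to above, with Laplace smoothing; the toy weather data is purely illustrative:

```python
import math
from collections import Counter, defaultdict

def train_nb(examples):
    """Train on (feature_tuple, label) pairs with discrete feature values."""
    labels = Counter(lbl for _, lbl in examples)
    counts = defaultdict(Counter)   # (label, position) -> value counts
    values = defaultdict(set)       # position -> set of observed values
    for feats, lbl in examples:
        for pos, v in enumerate(feats):
            counts[(lbl, pos)][v] += 1
            values[pos].add(v)
    return labels, counts, values, len(examples)

def predict_nb(model, feats):
    """Pick the label maximizing log prior + sum of smoothed log likelihoods."""
    labels, counts, values, n = model
    def log_post(lbl):
        lp = math.log(labels[lbl] / n)
        for pos, v in enumerate(feats):
            c = counts[(lbl, pos)]
            lp += math.log((c[v] + 1) / (labels[lbl] + len(values[pos])))
        return lp
    return max(labels, key=log_post)

model = train_nb([(("sunny", "hot"), "no"), (("sunny", "mild"), "no"),
                  (("rain", "mild"), "yes"), (("rain", "cool"), "yes"),
                  (("overcast", "hot"), "yes")])
```

The "naive" assumption is that features are conditionally independent given the label, which is what lets the per-position likelihoods simply be summed in log space.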
Theory. The paper focuses on extending the capabilities of iterated linear programming for dealing with problems encountered in dynamic nonsmooth process simulation. The paper presents a refined LP method with a new descent strategy, a method for the treatment of discontinuities occurring in dynamic simulation problems, and a new formulation to solve multiphase problems. The paper does not mention any other sub-categories of AI such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Reinforcement Learning, Multi-Agent Learning  Explanation:  The paper discusses a general method for incremental self-improvement and multi-agent learning, which involves the use of reinforcement learning techniques. The authors propose a framework for multi-agent learning that incorporates reinforcement learning algorithms, and they demonstrate the effectiveness of their approach through experiments on a variety of tasks. Additionally, the paper discusses the use of multi-agent learning in the context of self-improvement, where agents learn from their own experiences and from the experiences of other agents in the system. Overall, the paper is primarily focused on reinforcement learning and multi-agent learning, with some discussion of related topics such as self-improvement.
Genetic Algorithms.   Explanation: The paper discusses the application of genetic algorithms to optimization problems and proposes a new model of auto-adaptive behavior for individuals in a population. The rule set for controlling changes in social state is implemented as a massively-parallel genetic algorithm. The computational experiments also compare the results of the new approach to an ordinary genetic algorithm. There is no mention of any other sub-category of AI in the text.
Neural Networks - This paper is primarily focused on providing a collection of problems for neural network learning and defining rules for benchmarking neural network algorithms. The datasets provided are in a format suitable for neural network training and the purpose of the collection is to facilitate the evaluation and comparison of neural network algorithms.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper presents NeuroChess, a program that learns chess board evaluation functions represented by artificial neural networks.   Reinforcement Learning: The program integrates temporal differencing, a variant of explanation-based learning, and inductive neural network learning to learn from the final outcome of games. These techniques are commonly used in reinforcement learning. The paper also discusses the strengths and weaknesses of this approach, which is a common topic in reinforcement learning research.
Neural Networks.   Explanation: The paper discusses the integration of a prototype-based neural network with a case-based reasoning system to improve the retrieval phase. The paper focuses on constructing a simple and efficient indexing system structure using an incremental prototype-based neural network. While other sub-categories of AI may be indirectly related to the topic, such as rule learning or probabilistic methods, the main focus of the paper is on the use of neural networks.
Theory.   Explanation: The paper presents a theoretical approach to the problem of PAC-learning the concept class of one-dimensional geometric patterns using the Hausdorff metric. The focus is on developing a polynomial-time algorithm for this problem and presenting experimental results to evaluate its performance. There is no mention of any other sub-categories of AI such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Theory  Explanation: The paper discusses a theoretical concept of measure functions for model selection and evaluation, and how it can be used to state a learning problem as a computational problem. The paper does not discuss any specific AI sub-category such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Neural Networks. This paper belongs to the sub-category of Neural Networks. The paper discusses the use of recurrent attractor networks for modeling psychological phenomena and investigates the conditions under which articulated attractors arise in these networks. The paper also explores the use of backpropagation for training these networks.
Rule Learning, Case Based.   The paper discusses decision trees, which are a type of rule learning algorithm. The focus of the paper is on simplifying decision trees, which is a sub-topic within rule learning. Additionally, the paper briefly discusses the application of decision trees to case retrieval in case-based reasoning systems, which falls under the category of case-based AI.
Probabilistic Methods.   Explanation: The paper proposes a method for monitoring Markov chain samplers, which are probabilistic methods used for generating samples from complex distributions. The paper discusses the use of a 1-dimensional summary statistic and the cusum path plot to diagnose convergence and compare different samplers. The paper does not discuss any other sub-categories of AI such as case-based reasoning, genetic algorithms, neural networks, reinforcement learning, or rule learning.
Probabilistic Methods.   Explanation: The paper discusses the use of the Gibbs sampler, which is a probabilistic method for sampling from the Gibbs distribution. The paper also mentions the Ising model, which is a probabilistic model used in statistical mechanics and is closely related to the Gibbs distribution. The paper's focus on Bayesian image reconstruction also involves probabilistic modeling and inference.
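A minimal sketch of Gibbs sampling for a 2-D Ising model, resampling each spin from its full conditional given its neighbours (free boundaries; grid size, temperature, and sweep count are illustrative assumptions):

```python
import math
import random

def gibbs_ising(n=8, beta=0.6, sweeps=200, seed=0):
    """Systematic-scan Gibbs sampler for an n-by-n Ising model."""
    rng = random.Random(seed)
    s = [[rng.choice([-1, 1]) for _ in range(n)] for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                # Sum of the (up to four) neighbouring spins.
                nb = sum(s[x][y]
                         for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= x < n and 0 <= y < n)
                # Full conditional: P(s_ij = +1 | neighbours).
                p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * nb))
                s[i][j] = 1 if rng.random() < p_up else -1
    return s
```

Each update draws one spin exactly from its conditional distribution, so the chain leaves the Gibbs (Boltzmann) distribution invariant.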
Neural Networks, Probabilistic Methods.   Neural Networks: The paper proposes a self-organizing map (SOM) model for orientation map development in the visual cortex. SOM is a type of artificial neural network that can learn to represent high-dimensional data in a lower-dimensional space. The authors use SOM to simulate the development of orientation maps in the visual cortex, which is a well-known phenomenon in neuroscience.   Probabilistic Methods: The authors introduce a modification to the SOM model that breaks the rotational symmetry of the input space. They do this by introducing a probabilistic term in the update rule of the SOM, which biases the learning towards certain orientations. The authors show that this modification leads to the emergence of pinwheel-like structures in the orientation map, which is also observed in real neural tissue. The probabilistic term is derived from a statistical model of the input distribution, which is assumed to be Gaussian.
Probabilistic Methods.   Explanation: The paper presents methods for coupling hidden Markov models (HMMs) to model systems of multiple interacting processes. The resulting models have multiple state variables that are temporally coupled via matrices of conditional probabilities. The paper also introduces a deterministic O(T(CN)^2) approximation for maximum a posteriori (MAP) state estimation, which enables fast classification and parameter estimation via expectation maximization. The paper compares these algorithms on synthetic and real data, including interpretation of video. All of these methods and experiments involve probabilistic modeling and inference.
Probabilistic Methods, Theory  Probabilistic Methods: The paper discusses Markov Chain Monte Carlo (MCMC) convergence diagnostics, a probabilistic technique used when estimating the posterior distribution of a model.  Theory: The paper analyzes the theoretical aspects of these diagnostics, in particular the possible biases that MCMC convergence diagnostics can induce in the resulting estimates.
Rule Learning, Reinforcement Learning  Explanation:   The paper belongs to the sub-category of Rule Learning because it focuses on learning logical exceptions in chess. The authors propose a rule-based approach to identify and learn exceptions to the standard chess rules. They use a set of predefined rules to identify exceptions and then use a decision tree algorithm to learn new rules based on the identified exceptions.  The paper also belongs to the sub-category of Reinforcement Learning because the authors use a reinforcement learning algorithm to evaluate the learned rules. They use a chess engine to simulate games and evaluate the performance of the learned rules. The reinforcement learning algorithm is used to adjust the weights of the rules based on the performance of the system.
Probabilistic Methods.   Explanation: The paper describes the use of Bayesian analysis, which is a probabilistic method, for analyzing agricultural field experiments. The authors discuss the need for spatial representations of unobserved fertility patterns and the use of Markov chain Monte Carlo methods for analyzing complex formulations. The paper includes three analyses of variety trials for yield and one example involving binary data, all of which are analyzed using Bayesian methods.
Probabilistic Methods.   Explanation: The paper discusses CABeN, which is a collection of algorithms for belief networks. Belief networks are a type of probabilistic graphical model used in artificial intelligence to represent uncertain knowledge and make probabilistic inferences. The paper focuses on the algorithms used to manipulate and reason with belief networks, which are probabilistic methods.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the discretization of continuous state space Markov chains using renewal times and subsampling. It also uses a divergence criterion derived from Kemeny and Snell (1960) to assess convergence, which is a probabilistic method.   Theory: The paper discusses general convergence properties on finite state spaces and uses Birkhoff's pointwise ergodic theorem for stopping rules. These are theoretical concepts used in the analysis of Markov chains.
Probabilistic Methods.   Explanation: The paper discusses the use of Markov chain Monte Carlo (MCMC) samplers as a probabilistic method for Bayesian computation. The proposed solution to the problem of slow mixing in high dimensional and strongly correlated density functions involves augmenting the state space with multiple chains in parallel and using a genetic-style crossover operator to update individual chains. The methodology is also extended to deal with variable selection and model averaging in high dimensions.
Probabilistic Methods.   Explanation: The paper describes the use of variational approximation methods for efficient probabilistic reasoning in the QMR-DT database, which is a large-scale belief network based on statistical and expert knowledge in internal medicine. The focus is on diagnostic inference, which involves probabilistic reasoning to determine the most likely diagnosis given a set of symptoms and other relevant information. The paper compares the accuracy of the variational approximation methods to stochastic sampling methods, which are also probabilistic in nature. Therefore, the paper belongs to the sub-category of Probabilistic Methods in AI.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper proposes a self-optimizing approach driven by an evolutionary strategy, which is a type of genetic algorithm. The algorithm co-evolves the two determining parameters of the network's layout: the number of centroids and the centroids' positions.   Neural Networks: The paper discusses the optimization of RBF networks, which are a type of neural network. The algorithm proposed in the paper uses a computationally efficient approximation of RBF networks to optimize the K-means clustering process. The paper also discusses the effects of a neural network's topology on its performance.
Genetic Algorithms, Rule Learning.   Genetic algorithms are used in this paper to search for the best subset of features for recognizing complex visual concepts. The paper describes how a genetic algorithm is used to search the space of all possible subsets of a large set of candidate discrimination features.   Rule learning is also present in this paper, as the C4.5 decision-tree learning algorithm is used to evaluate candidate feature subsets and produce a decision tree based on the given features using a limited amount of training data. The resulting decision tree is then used to classify unseen testing data, and the classification performance is used as the fitness of the underlying feature subset.
Case Based, Probabilistic Methods  Explanation:  - Case Based: The paper describes INCA, an intelligent assistant that retrieves a case from a case library to seed the initial schedule for crisis response. This is an example of a case-based approach to AI. - Probabilistic Methods: The paper mentions using probability to estimate the likelihood of certain events occurring during crisis response. For example, "INCA uses probability estimates to reason about the likelihood of different events occurring during the response."
Rule Learning, Genetic Algorithms - The paper describes a system called SAMUEL that uses competition-based heuristics such as genetic algorithms to develop high performance reactive rules for sequential decision tasks. The method for deriving explanations involves explaining how the reactive rules trigger a sequence of actions to satisfy inferred subgoals. This is an example of rule learning, where the system learns rules based on the task environment and payoff function. The use of genetic algorithms is an example of a competition-based heuristic that is used to develop these rules.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper presents a relational learning algorithm called grdt that searches in a hypothesis space restricted by rule schemata defined in terms of grammars.   Probabilistic Methods are also present in the text as the paper discusses using machine learning to enhance the link between low-level representations of sensing and action and high-level representation of planning. This involves combining perception and action at every level, which can be seen as a probabilistic approach to decision-making.
Probabilistic Methods.   Explanation: The paper surveys methods proposed in the MCMC (Markov chain Monte Carlo) literature for assessing the convergence of MCMC algorithms. It establishes a common notation and compares the interpretability and applicability of the different convergence diagnostics. Because MCMC is a core tool of probabilistic inference, the paper belongs to probabilistic methods.
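As a concrete example of one widely used diagnostic of the kind such surveys compare, the Gelman-Rubin potential scale reduction factor contrasts within-chain and between-chain variance. A minimal sketch on toy chains (the data and chain setup are illustrative assumptions, not from the paper):

```python
import random
import statistics

# Gelman-Rubin potential scale reduction factor (R-hat).
# Values near 1 suggest the parallel chains have mixed well.

def gelman_rubin(chains):
    n = len(chains[0])                                   # draws per chain
    means = [statistics.fmean(c) for c in chains]
    W = statistics.fmean(statistics.variance(c) for c in chains)  # within-chain
    B = n * statistics.variance(means)                   # between-chain
    var_hat = (n - 1) / n * W + B / n                    # pooled variance estimate
    return (var_hat / W) ** 0.5

rng = random.Random(0)
# Two chains drawn from the same distribution, i.e. already "converged":
chains = [[rng.gauss(0, 1) for _ in range(1000)] for _ in range(2)]
print(round(gelman_rubin(chains), 2))  # close to 1.0
```

With chains sampling different regions (e.g. means 0 and 5), R-hat rises well above 1, flagging non-convergence.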
Probabilistic Methods, Neural Networks  The paper belongs to the sub-category of Probabilistic Methods as it discusses the use of Dynamic Probabilistic Networks (DPNs) for compositional modeling. DPNs are a type of probabilistic graphical model that can represent complex dependencies between variables. The paper also mentions the use of Bayesian inference, which is a probabilistic method for updating beliefs based on new evidence.  The Neural Networks label is a looser fit: DPNs are graphical models rather than neural networks, though the two share the structure of interconnected nodes representing variables and their dependencies, with parameters learned from data. The connection to neural networks rests on this structural analogy and on parameter learning, not on backpropagation-trained deep networks.
This paper belongs to the sub-category of AI called Case Based.   Explanation: The paper proposes a new methodology for time series recognition that is based on memory, where past cases are stored and used to recognize new cases. This is the fundamental concept of Case Based reasoning, which is a subfield of AI that involves solving new problems by adapting solutions to similar past problems. The paper also mentions the use of similarity measures to compare new cases with past cases, which is another key aspect of Case Based reasoning.
Neural Networks.   Explanation: The paper describes the use of a feedforward neural network to approximate the desired camera-joint mapping for tracking a moving object. The controllers described in the paper are based on this neural network. There is no mention of any other sub-category of AI in the text.
The paper belongs to multiple sub-categories of AI, specifically: Neural Networks, Probabilistic Methods, Reinforcement Learning, Rule Learning.   Neural Networks: The paper discusses the use of neural networks in various applications, such as image recognition and natural language processing. It also mentions the use of deep learning techniques, which are a type of neural network.  Probabilistic Methods: The paper discusses the use of probabilistic models, such as Bayesian networks, in machine learning. It also mentions the use of probabilistic programming languages, which allow for the creation of probabilistic models.  Reinforcement Learning: The paper discusses the use of reinforcement learning in various applications, such as game playing and robotics. It also mentions the use of deep reinforcement learning, which is a type of reinforcement learning that uses deep neural networks.  Rule Learning: The paper discusses the use of rule-based systems in machine learning, such as decision trees and association rule mining. It also mentions the use of rule induction algorithms, which are used to automatically generate rules from data.
Probabilistic Methods.   Explanation: The paper discusses the use of covariance information to build predictive causal models, which is a key aspect of probabilistic methods in AI. The fbd algorithm mentioned in the paper combines covariance information with a heuristic to build these models. The paper also compares the performance of fbd with that of Pearl and Verma's ic algorithm, another probabilistic method for causal inference.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper employs genetic algorithms to search the space of decision policies.   Reinforcement Learning: The paper addresses the problem of learning decision rules for sequential tasks, which is a key problem in reinforcement learning. The learning method relies on the notion of competition, which is a common approach in reinforcement learning. The paper also discusses issues arising from differences between the simulation model on which learning occurs and the target environment on which the decision rules are ultimately tested, which is a common challenge in reinforcement learning.
Rule Learning, Theory.   Rule Learning is present in the text as the paper discusses a Horn clause relational learning algorithm, M-FOCL, which is a type of rule learning algorithm. The paper also discusses biasing the learning method, which is a common technique in rule learning.   Theory is present in the text as the paper discusses the inductive learning problem and how inductive learning algorithms bias their learning method. The paper presents a transference bias, discusses how it can be utilized to learn multiple concepts, and provides a preliminary empirical evaluation of the effects of this bias on noise-free and noisy data.
Neural Networks, Probabilistic Methods, Theory.  Neural Networks: The paper is primarily concerned with architecture selection issues for feed-forward neural networks.  Probabilistic Methods: The paper discusses selecting and combining models within the framework of statistical theory for model choice.  Theory: The paper discusses the important problem of choosing the architecture of a neural network within the context of statistical theory for model choice.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper proposes a model-theoretic definition of causation, which involves probabilistic reasoning and statistical semantics.  Theory: The paper provides a complete characterization of the conditions under which a distinction between genuine causal influences and spurious covariations is possible. It also presents a proof-theoretical procedure for inductive causation, which involves theoretical reasoning and logical analysis.
Theory.   Explanation: This paper deals with the theoretical analysis of the all-to-all broadcast on the CNS-1 network. It presents a lower bound for the run time and an algorithm meeting this bound, and analyzes the performance of alternative interface designs based on a run time model of the network. There is no mention of any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Probabilistic Methods.   Explanation: The paper proposes a framework for representing descriptive, context-sensitive knowledge that integrates categorical and uncertain knowledge in a network formalism. This suggests the use of probabilistic methods, which deal with uncertainty and variation in data. The paper does not mention any other sub-categories of AI.
Neural Networks.   Explanation: The paper discusses methods for estimating the standard error of predicted values from a multi-layer perceptron, which is a type of neural network. The paper does not discuss any other sub-categories of AI.
Probabilistic Methods.   Explanation: The paper discusses the use of Bayesian inference and mixture models to model amplitude fluctuations of electrical potentials in the nervous system. The focus is on modeling the noise terms as a mixture of normals using a Dirichlet process mixture, which is a probabilistic method. The paper does not discuss any other sub-categories of AI.
Neural Networks.   Explanation: The paper focuses on using feedforward neural networks to approximate the desired camera-joint mapping for the robot manipulator. Additionally, the paper proposes several "predictive" controllers that use time derivatives to predict the next position of a moving object. These controllers also utilize neural networks to make these predictions. Therefore, the paper primarily belongs to the sub-category of Neural Networks in AI.
Neural Networks.   Explanation: The paper discusses the AA1 model of ASOCS, which is a type of neural network. The paper describes how AA1 grows and self-organizes to find features that discriminate between concepts, a key characteristic of self-organizing neural networks. The paper also mentions that convergence to a training set is guaranteed and bounded linearly in time, a convergence property established for this model.
Probabilistic Methods.   Explanation: The paper discusses the maximum likelihood approach for source separation, which is a probabilistic method that involves modeling the probability distributions of the sources and the noise. The Expectation-Maximization (EM) algorithm, which is used for maximizing the likelihood, is also a probabilistic method. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Probabilistic Methods.   Explanation: The paper describes a diagnostic/recovery procedure based on a well-known Taylor series approximation technique. The procedure applies to any classifier known to be robust, including both neural network and traditional parametric pattern classifiers generated by a supervised learning procedure in which an empirical risk/benefit measure is optimized. It determines a ranked set of probable causes for the degraded health state, which can be used as a prioritized checklist for isolating system anomalies and quantifying corrective action. The use of probabilities and statistical models is central to the approach, making the paper most closely related to the sub-category of Probabilistic Methods within AI.
Case Based, Genetic Algorithms.   Explanation:  - Case Based: The paper is focused on case combination, which is a problem in Case Based Reasoning. The authors previously formalized case combination as a constraint satisfaction problem, and in this paper they propose a method to improve case adaptability using a genetic algorithm.  - Genetic Algorithms: The paper proposes a method to improve case adaptability using a genetic algorithm. The authors introduce a fitness function and perturb a sub-solution to allow subsequent case combination to proceed more efficiently.
Case Based.   Explanation: The paper discusses the use of Case-Based Reasoning (CBR) techniques in combination with Dynamic Constraint Satisfaction Problem (DCSP) formalism. It specifically mentions the similarity between the challenges faced by DCSP and case adaptation, and how CSP and CBR can work together to address these challenges. Therefore, the paper primarily belongs to the sub-category of Case Based AI.
Theory  Explanation: The paper presents a mathematical model of concept learning, called Probably Approximately Correct (PAC) learning, and uses it to analyze the impact of different forms of bias on learning. This falls under the category of AI theory, which involves developing mathematical models and algorithms to understand and improve AI systems. The paper does not discuss any of the other sub-categories listed.
Probabilistic Methods, Rule Learning  Explanation:   Probabilistic Methods: The paper discusses the process of calibration, which can be viewed as a form of supervised learning in the presence of prior knowledge. This implies the use of probabilistic methods to learn values for the free parameters.  Rule Learning: The paper describes a new divide-and-conquer approach in which subsets of the parameters are calibrated while others are held constant. This approach succeeds because it is possible to select training examples that exercise only portions of the model. This implies the use of rule learning to optimize the calibration process.
Case Based, Rule Learning  Explanation:  - Case Based: The paper discusses the situation in which a learner's testing set contains close approximations of cases which appear in the training set, which can be considered as "virtual seens". This concept is related to case-based reasoning, which involves solving new problems by adapting solutions from similar past cases.  - Rule Learning: The paper specifically mentions the 1R algorithm and C4.5, which are both rule learning algorithms. The paper also proposes using the 1-NN algorithm to derive a normalizing baseline for generalization statistics, which can be seen as a rule-based approach to normalization.
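The 1-NN baseline mentioned in this record can be sketched in a few lines. The toy data and the squared-Euclidean distance are illustrative assumptions, not the paper's setup:

```python
# Minimal 1-nearest-neighbour classifier, used here only to
# illustrate the kind of baseline the paper derives.
# All data below are made-up toy cases, not from the paper.

def one_nn_predict(train, query):
    """Return the label of the training case closest to `query`
    under squared Euclidean distance."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda case: sqdist(case[0], query))
    return label

train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((1.0, 1.0), "B")]
print(one_nn_predict(train, (0.9, 0.8)))  # nearest stored case is (1.0, 1.0) -> "B"
```

Because 1-NN memorizes the training set, a test case that closely approximates a training case ("virtual seen") is almost always classified the same way, which is exactly why it makes a useful normalizing baseline.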
Case Based, Neural Networks  Explanation:   This paper belongs to the sub-category of Case Based AI because it uses exemplars (previously learned examples) to recognize music structure. The authors state that their approach is "exemplar-based" and they use a database of previously annotated music pieces to train their system.  This paper also belongs to the sub-category of Neural Networks because the authors use a specific type of neural network called a Self-Organizing Map (SOM) to cluster the exemplars and recognize the structure of new music pieces. The authors explain how the SOM works and how they use it in their system.
Case Based, Rule Learning  Explanation:  This paper belongs to the sub-category of Case Based AI because it focuses on conversational case-based reasoning systems and how to improve their performance through automated revision of case libraries. The paper also belongs to the sub-category of Rule Learning because it describes an automated inductive approach for revising case libraries to increase their conformance with design guidelines. This approach involves learning rules from existing case libraries and using them to revise the libraries.
Probabilistic Methods.   Explanation: The paper discusses the use of mixture models and the EM algorithm to estimate parameters and solve the missing data problem in medical and machine diagnosis. These are both probabilistic methods commonly used in AI.
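The EM-plus-mixture-model recipe can be sketched for the simplest case, a two-component one-dimensional Gaussian mixture. This is an illustrative toy (synthetic data, two components); the paper's diagnosis models are richer:

```python
import math
import random

# Toy EM for a two-component one-dimensional Gaussian mixture.
# Illustrative sketch only, not the paper's diagnosis model.

def em_gmm(xs, iters=50):
    mu = [min(xs), max(xs)]              # crude but effective initialisation
    sigma = [1.0, 1.0]
    mix = [0.5, 0.5]                     # mixing proportions
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            dens = [mix[k] / (sigma[k] * math.sqrt(2 * math.pi))
                    * math.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                    for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate proportions, means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mix[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            sigma[k] = math.sqrt(max(var, 1e-6))
    return mu, sigma, mix

rng = random.Random(0)
xs = ([rng.gauss(0.0, 0.5) for _ in range(200)]
      + [rng.gauss(5.0, 0.5) for _ in range(200)])
mu, sigma, mix = em_gmm(xs)
print(sorted(round(m, 1) for m in mu))   # the two recovered means, near 0 and 5
```

The E-step fills in the "missing data" (which component generated each point) with expected values, and the M-step re-fits the parameters given those expectations — the same alternation used for missing-data problems in diagnosis.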
Neural Networks, Theory.   Neural Networks: The paper discusses an example of a neural net with a sigmoid transfer function and a training set of binary vectors. It also mentions the sum of squared errors as a function of weights, which is a common concept in neural network training.   Theory: The paper explores the concept of local minima in neural network training and provides an example of a network with a local minimum that is not a global minimum. It also mentions the possibility of smaller binary examples existing, indicating a theoretical exploration of the topic.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the use of majority vote classifiers, which are a type of ensemble method that combines the predictions of multiple base classifiers. The combination is done using a probabilistic approach, where each base classifier's prediction is treated as a vote with a certain probability of being correct. The paper also discusses the use of Bayesian methods to estimate the probabilities of the base classifiers' predictions.  Theory: The paper provides a theoretical analysis of majority vote classifiers, including their error bounds and convergence properties. The authors derive theoretical results that show how the performance of the majority vote classifier depends on the performance of the base classifiers and the correlation between their predictions. The paper also discusses the relationship between majority vote classifiers and other ensemble methods, such as bagging and boosting.
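Under the idealised assumption that the base classifiers err independently, the majority-vote error is a binomial tail probability — the kind of quantity such theoretical analyses bound. A small sketch (the independence assumption and parameter values are illustrative, not the paper's):

```python
from math import comb

# Error rate of a majority vote over M independent base classifiers,
# each wrong with probability p (the idealised independence setting
# often used to motivate error bounds for vote-based ensembles).

def majority_vote_error(M, p):
    """P(more than half of M independent classifiers err)."""
    return sum(comb(M, k) * p ** k * (1 - p) ** (M - k)
               for k in range(M // 2 + 1, M + 1))

print(majority_vote_error(11, 0.3))  # well below the base error of 0.3
```

Correlation between base classifiers erodes this gain, which is why the relationship between base-classifier accuracy and prediction correlation is central to the analysis.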
Probabilistic Methods.   Explanation: The paper discusses a probabilistic approach to learning a credulous version of a default theory that is optimally accurate. The algorithm presented in the paper uses probability to estimate the unknown distribution of queries and hill-climbing to find a local optimum.
Reinforcement Learning.   Explanation: The paper discusses a specific approach to reinforcement learning called "Multigrid Q-Learning." The authors describe how this method can be used to solve problems in which an agent must learn to make decisions based on feedback from its environment. The paper includes a detailed explanation of the Q-Learning algorithm and how it can be extended to work with multiple grids. Overall, the focus of the paper is on using reinforcement learning to solve complex problems, making it a clear example of this sub-category of AI.
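The basic tabular Q-learning update that the multigrid variant builds on can be sketched on a hypothetical corridor task (the task, parameters, and code are illustrative assumptions, not the authors' multigrid method):

```python
import random

# Tabular Q-learning on a toy corridor: states 0..4, reward 1 on
# reaching state 4.  A minimal sketch of the basic update only,
# not the paper's multigrid extension.

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=1):
    rng = random.Random(seed)
    n_states, goal = 5, 4
    Q = [[0.0, 0.0] for _ in range(n_states)]   # actions: 0=left, 1=right

    def greedy(s):
        best = max(Q[s])
        return rng.choice([a for a in (0, 1) if Q[s][a] == best])

    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else greedy(s)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == goal else 0.0
            # Q-learning update: bootstrap from the best next-state value
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
        # (the goal state is terminal; its Q-values stay at zero)
    return Q

Q = q_learning()
policy = ["right" if Q[s][1] > Q[s][0] else "left" for s in range(4)]
print(policy)  # every non-goal state learns to prefer moving right
```

The agent learns purely from environmental feedback, with no model of the transition dynamics — the property the record highlights.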
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses the inherent compression pressure towards short, elegant and general solutions in a genetic programming system. It also talks about the effects of crossover probability, maximum depth or length of solutions, explicit parsimony, and modularization on the evolution of solutions.  Theory: The paper presents a hypothesis and provides a basis for an analysis of the effects of compression pressure on the evolution of solutions in genetic programming systems. It also suggests ways to overcome the negative implications of compression pressure for successful evolution. Additionally, an empirical investigation is presented to support the hypothesis.
This paper belongs to the sub-category of AI called Genetic Algorithms.   Explanation: The title of the paper explicitly mentions "GA Results," which refers to the results of Genetic Algorithms. The abstract also mentions "CBR Assisted Explanation" (CBR stands for Case-Based Reasoning), but case-based reasoning is not the main focus of the paper. Therefore, Genetic Algorithms is the sub-category of AI most related to this paper.
Reinforcement Learning, Genetic Algorithms, Rule Learning.   Reinforcement Learning is the main sub-category of AI discussed in the paper, as the XCS classifier system is a reinforcement learning algorithm. The paper discusses how XCS forms complete mappings of the payoff environment and evolves optimal populations using accuracy-based fitness.   Genetic Algorithms are also present in the paper, as XCS uses evolutionary search to evolve its population of classifiers. The paper specifically mentions the use of condensation, a technique in which evolutionary search is suspended by setting the crossover and mutation rates to zero.   Rule Learning is another sub-category of AI present in the paper, as XCS is a rule-based classifier system. The paper discusses how XCS evolves a set of non-overlapping classifiers to accurately map input/action pairs to payoff predictions using the smallest possible set of classifiers.
Rule Learning, Theory  Explanation:  The paper discusses the implementation of a rule induction system called Rise 1.0, which is an example of rule learning in AI. The paper also presents a theoretical comparison of Rise with the CN2 system, indicating that the "conquering without separating" approach used in Rise is more effective in certain domains. Therefore, the paper belongs to the sub-category of Rule Learning and Theory.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper discusses a genetic programming method for optimizing the architecture and connection weights of neural networks. The genotype of each network is represented as a tree and genetic operators are used to adapt the depth and width of the tree.   Neural Networks: The paper focuses on optimizing the architecture and connection weights of multilayer feedforward neural networks. The weights are trained using a next-ascent hillclimbing search. The fitness function proposed in the paper quantifies the principle of Occam's razor, which makes an optimal trade-off between the error fitting ability and the parsimony of the network.
Neural Networks.   Explanation: The paper focuses on the performance of a neural network in categorizing facial expressions and comparing it with human subjects. The paper discusses the experiments conducted using interpolated imagery and how the neural network accurately captures the categorical nature of human responses. The paper also discusses the limitations of the model, which are attributed to differences between the stimuli used in the network simulations and those shown to the human subjects. Therefore, the paper primarily belongs to the sub-category of AI, Neural Networks.
Genetic Algorithms.   Explanation: The paper discusses a tool for automatic generation of structured models for complex dynamic processes using genetic programming. The tool is based on a block oriented approach with a transparent description of signal paths. The paper also provides a short survey on other techniques for computer-based system identification, but the main focus is on the genetic programming approach used in the SMOG system. Therefore, the paper is most related to the Genetic Algorithms sub-category of AI.
Genetic Algorithms.   Explanation: The paper explicitly mentions the use of a simple genetic algorithm and examines the role of hyperplane ranking during genetic search. The other sub-categories of AI are not mentioned or relevant to the content of the paper.
Genetic Algorithms, Theory.   The paper primarily focuses on the methodology and applications of genetic programming, which is a subfield of genetic algorithms. The authors discuss the theoretical foundations of genetic programming and how it can be parallelized to improve its efficiency. They also provide examples of its applications in various fields such as image processing, data mining, and robotics. Therefore, the paper is most related to genetic algorithms.   Additionally, the paper also discusses the theoretical aspects of genetic programming, such as the role of fitness functions and the selection process. This indicates that the paper also has a strong connection to the theory of AI.
Genetic Algorithms, Theory.   Explanation:  The paper belongs to the sub-category of Genetic Algorithms because it discusses the two genetic operators, crossover and mutation, which are fundamental to genetic algorithms. It also belongs to the sub-category of Theory because it provides a theoretical analysis of the roles of crossover and mutation in genetic algorithms.
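The two operators can be illustrated on bit-strings; this is a generic sketch (the representation and parameters are assumptions, not the paper's specific analysis):

```python
import random

# The two fundamental genetic operators: one-point crossover
# recombines two parent bit-strings, while mutation flips
# individual bits independently.

def one_point_crossover(p1, p2, rng):
    cut = rng.randrange(1, len(p1))          # crossover point, never at an end
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(bits, rate, rng):
    return [b ^ 1 if rng.random() < rate else b for b in bits]

rng = random.Random(42)
a, b = [0] * 8, [1] * 8
c1, c2 = one_point_crossover(a, b, rng)
print(c1, c2)               # complementary mixes of the two parents
print(mutate(a, 0.5, rng))  # roughly half the bits flipped
```

Crossover recombines existing building blocks while mutation injects novelty — the complementary roles whose analysis the paper undertakes.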
Probabilistic Methods.   Explanation: The paper discusses the use of Bayesian networks, which are a probabilistic graphical model, for classification. The Tree Augmented Naive Bayes (TAN) classifier introduced in the paper is based on Bayesian networks and the extension proposed in the paper also uses parametric and semiparametric conditional probabilities. The paper also discusses the advantages of using the modeling language of Bayesian networks to represent both discrete and continuous attributes simultaneously.
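For contrast with TAN, plain naive Bayes with Gaussian class-conditional densities — the baseline that TAN augments with tree-structured attribute dependencies — can be sketched as follows (toy data; illustrative only, not the paper's classifier):

```python
import math
from collections import defaultdict

# Plain naive Bayes with Gaussian class-conditional densities:
# each continuous attribute is modelled independently per class.

def fit(data):
    """data: list of (features, label).  Returns per-class priors
    and per-attribute (mean, std) estimates."""
    by_class = defaultdict(list)
    for x, y in data:
        by_class[y].append(x)
    model = {}
    for y, rows in by_class.items():
        n = len(rows)
        stats = []
        for j in range(len(rows[0])):
            col = [r[j] for r in rows]
            mu = sum(col) / n
            sd = math.sqrt(sum((v - mu) ** 2 for v in col) / n) or 1e-6
            stats.append((mu, sd))
        model[y] = (n / len(data), stats)
    return model

def predict(model, x):
    def logpost(prior, stats):
        lp = math.log(prior)
        for v, (mu, sd) in zip(x, stats):
            lp += -math.log(sd * math.sqrt(2 * math.pi)) \
                  - 0.5 * ((v - mu) / sd) ** 2
        return lp
    return max(model, key=lambda y: logpost(*model[y]))

data = [((1.0, 2.0), "a"), ((1.2, 1.8), "a"),
        ((5.0, 6.0), "b"), ((5.2, 6.1), "b")]
model = fit(data)
print(predict(model, (1.1, 2.0)))  # -> "a"
```

TAN relaxes exactly the independence assumption made in `logpost`: each attribute may additionally condition on one other attribute, arranged in a tree.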
Probabilistic Methods, Theory.   Probabilistic Methods: The paper proposes a hierarchically structured representation language that extends dynamic Bayesian networks and object-oriented Bayesian networks to represent complex stochastic systems. The paper also provides a simple inference mechanism for the representation via translation to Bayesian networks.   Theory: The paper discusses the limitations of existing frameworks for representing complex stochastic systems and proposes a new language that supports a natural representation for certain system characteristics that are hard to capture using more traditional frameworks. The paper also suggests ways in which the inference algorithm can exploit the additional structure encoded in the proposed representation.
Neural Networks, Theory.   Neural Networks: The paper discusses the limitations and potential solutions for a specific type of neural network, Recurrent Cascade Correlation, and proposes a constructive learning method for it.  Theory: The paper presents a theoretical analysis of the limitations of Recurrent Cascade Correlation in representing certain types of finite state automata, and proposes a solution based on a simple constructive training method.
Case Based, Theory  Explanation:  - Case Based: The paper discusses indexing of cases, which is a key topic in Memory-Based Reasoning (MBR), a subfield of Case-Based Reasoning (CBR). - Theory: The paper proposes a new weighting method based on a statistical technique called Quantification Method II, and claims that the generated attribute weights are optimal in a certain sense. The paper also mentions that existing methods have no theoretical background.
Neural Networks, Theory.  Explanation:  1. Neural Networks: The paper presents one of the constructions of controllers in terms of a "neural-network type" one-hidden layer architecture.  2. Theory: The paper presents a general result on the stabilization of linear systems using bounded controls, without any specific application or implementation. It discusses the necessary conditions for stabilization and presents two different constructions of controllers. Therefore, it belongs to the sub-category of Theory.
Neural Networks. This paper belongs to the sub-category of Neural Networks in AI. The paper proposes a classification scheme based on the integration of multiple Ensembles of ANNs (Artificial Neural Networks) for a seismic signal classification problem. The ANNs within the Ensembles are aggregated using Bagging, and the Ensembles are integrated non-linearly using a posterior confidence measure based on the agreement within the Ensembles. The paper demonstrates that such integration of a collection of ANN Ensembles is a robust way of handling high-dimensional problems with a complex non-stationary signal space, as in the current seismic classification problem.
Probabilistic Methods.   This paper belongs to the sub-category of Probabilistic Methods in AI. The authors discuss Bayesian model selection for generalized linear models, which is a probabilistic approach to model selection. They use the GLIB algorithm, which is a Bayesian model selection algorithm based on the Bayesian Information Criterion (BIC). The authors also discuss the use of prior distributions in Bayesian model selection, which is a key aspect of probabilistic methods.
Case Based.   Explanation: The paper advocates for an incremental revision framework for improving schedule quality and incorporating user dynamically changing preferences through Case-Based Reasoning. The implemented system, called CABINS, records situation-dependent tradeoffs and consequences that result from schedule revision to guide schedule improvement. The paper focuses on the use of case-based reasoning to acquire and incorporate user preferences. There is no mention of genetic algorithms, neural networks, probabilistic methods, reinforcement learning, rule learning, or theory in the text.
Probabilistic Methods.   Explanation: The paper discusses different types of qualitative probability, which is a key concept in probabilistic reasoning. The author explores different interpretations of probability, such as the frequency interpretation and the subjective interpretation, and discusses how they can be applied in various contexts. The paper also touches on the concept of Bayesian networks, which are a common tool in probabilistic reasoning. While other sub-categories of AI may be relevant to certain aspects of the paper, such as rule learning or theory, the focus on probability and its various interpretations makes probabilistic methods the most relevant sub-category.
Genetic Algorithms, Rule Learning.   Genetic Algorithms are present in the text through the description of the approach to behavior coordination using the genetic programming (GP) paradigm. The paper applies both conventional GP and steady-state GP to evolve a fuzzy-behavior for sensor-based goal-seeking.   Rule Learning is present in the text through the formulation of rules collectively responsible for necessary levels of intelligence in the behavior hierarchy. The paper describes how a collection of rules can be decomposed and efficiently implemented as a hierarchy of fuzzy-behaviors. Additionally, the paper describes the evolution of fuzzy coordination rules using the genetic programming (GP) paradigm.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents a two-layer network for unsupervised learning of distributions on binary vectors. The learning algorithms presented in the paper are based on gradient ascent and projection pursuit density estimation, which are commonly used in neural network training.  Probabilistic Methods: The paper presents a distribution model for binary vectors, called the influence combination model, which is a probabilistic method for modeling arbitrary distributions of binary vectors. The learning algorithms presented in the paper are based on maximizing the likelihood of the observed data under the influence combination model. The paper also compares the influence combination model with other probabilistic models such as the mixture model and principal component analysis.
Probabilistic Methods, Theory  Probabilistic Methods: This paper discusses the use of probabilistic methods in learning systems, specifically in the context of separating formal bounds from practical performance. The authors mention the use of Bayesian methods and probabilistic graphical models to address this issue.  Theory: The paper also delves into theoretical concepts such as generalization bounds and the bias-variance tradeoff. The authors discuss how these concepts can be used to analyze the performance of learning systems and to separate formal bounds from practical performance.
Probabilistic Methods, Reinforcement Learning, Theory.   Probabilistic Methods: The paper discusses the use of collective action to expedite search in combinatorial optimization problems, which involves probabilistic methods such as ant colony optimization.   Reinforcement Learning: The paper mentions the use of collective memory to improve learning in multi-agent systems, which can involve reinforcement learning techniques.   Theory: The paper presents a theoretical model of collective action and memory in a computational agent society, and examines the ability of the society to distribute task allocation without centralized control.
Theory. The paper discusses theoretical concepts and bounds related to learning boolean functions using a concept class of finite cardinality. It does not involve any practical implementation or application of AI techniques such as neural networks, genetic algorithms, or reinforcement learning.
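The standard bound for a finite concept class — with probability at least 1 − δ, any hypothesis consistent with m ≥ (1/ε)(ln|H| + ln(1/δ)) examples has error at most ε — can be evaluated directly. A small sketch (the paper's exact bounds may differ; the conjunction example is an assumption for illustration):

```python
from math import ceil, log

# Sample-complexity bound for consistent learning over a finite
# concept class H: m >= (1/eps) * (ln|H| + ln(1/delta)) examples
# suffice for error at most eps with probability at least 1 - delta.

def pac_sample_size(class_size, eps, delta):
    return ceil((log(class_size) + log(1.0 / delta)) / eps)

# Boolean conjunctions over n variables: |H| = 3^n (each variable
# appears positive, negated, or not at all).
n = 10
print(pac_sample_size(3 ** n, eps=0.1, delta=0.05))  # -> 140
```

The bound grows only logarithmically in |H|, which is why even exponentially large boolean concept classes remain learnable from modest samples.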
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper applies genetic programming to evolve intelligent agents. Genetic programming is a type of genetic algorithm that evolves computer programs to solve a specific problem.   Reinforcement Learning: The paper discusses the evolution of intelligent agents that build internal representations of their successive actions. This is a characteristic of reinforcement learning, where agents learn from the consequences of their actions to maximize a reward signal.
Genetic Algorithms.   Explanation: The paper specifically mentions the use of genetic algorithms as a modern stochastic optimization method for dealing with the search procedure involved in locating underwater sonar targets. The approach presented in the paper reduces the problem to that of search optimization, which can be dealt with using genetic algorithms. Therefore, genetic algorithms are the most related sub-category of AI to this paper.
Rule Learning, Theory.   The paper introduces a new type of intelligent agent called a constructive induction-based learning agent (CILA), which can incrementally adapt its knowledge representation space to better fit the given learning task. The agent's ability to autonomously make problem-oriented modifications to the originally given representation space is due to its constructive induction (CI) learning method, a type of rule learning. The paper also discusses the architecture of a CI-based learning agent and gives an empirical comparison of CI and selective induction (SI) on a set of six abstract domains involving DNF-type descriptions; the analysis of representation-space bias gives the paper its theoretical dimension.
Rule Learning, Genetic Algorithms.   Rule Learning is present in the text as the paper discusses the implementation of classifier systems, which are a type of rule-based machine learning algorithm. The package of subroutines described in the paper is designed to allow for the implementation of classifier systems in arbitrary environments, making it a domain-independent tool for rule learning.  Genetic Algorithms are also present in the text as the paper describes the use of a genetic algorithm to evolve the rules used by the classifier system. The paper discusses the use of a fitness function to evaluate the performance of different rule sets and the use of genetic operators such as crossover and mutation to generate new rule sets.
Neural Networks.   Explanation: The paper proposes a method for decreasing the computational complexity of self-organising maps, which are a type of neural network. The method involves partitioning the neurons into clusters and teaching them on a cluster-basis. The paper also introduces a measure for the amount of order in a self-organising map, which is a characteristic of neural networks.
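The paper's cluster-based speedup is not reproduced here, but for readers unfamiliar with the base algorithm it modifies, a minimal pure-Python sketch of a standard self-organising-map training loop (1-D neuron grid, decaying learning rate and Gaussian neighbourhood; all parameter values are illustrative assumptions, not the paper's) might look like this:

```python
import math
import random

def som_train(data, grid=4, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a tiny 1-D self-organising map; an illustrative sketch only."""
    rnd = random.Random(seed)
    dim = len(data[0])
    weights = [[rnd.random() for _ in range(dim)] for _ in range(grid)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighbourhood
        for x in data:
            # best-matching unit: the neuron whose weight vector is closest to x
            bmu = min(range(grid),
                      key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))
            for i in range(grid):
                # Gaussian neighbourhood on the 1-D grid of neurons
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                weights[i] = [w + lr * h * (v - w) for w, v in zip(weights[i], x)]
    return weights
```

The clustering idea in the paper would restrict the inner update to neurons within the winner's cluster rather than sweeping the whole grid.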
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper describes how hypotheses are abstracted into rule models. Incy, the inductive learner described in the paper, uses rule models for control decisions in the data-driven phase and for model-guided induction.   Probabilistic Methods are also present in the text as the paper discusses how the hypothesis space searched is restricted in some way, either through data-driven or model-based approaches. These approaches can be seen as implicitly or explicitly incorporating probabilistic reasoning into the learning process.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms are present in the text as the learning method employed in the paper relies on the notion of competition and employs genetic algorithms to search the space of decision policies.   Reinforcement Learning is present in the text as the paper addresses the problem of learning decision rules for sequential tasks, specifically learning tactical plans from a simple flight simulator where a plane must avoid a missile; because the genetic search over decision policies is guided by feedback from the task rather than labelled examples, it constitutes a form of reinforcement learning.
Genetic Algorithms, Reinforcement Learning  Explanation:  This paper belongs to the sub-category of Genetic Algorithms because it proposes the use of genetic algorithms to improve tactical plans. The paper explains how genetic algorithms can be used to optimize the selection of tactics and the allocation of resources in a military scenario. The paper also discusses the use of fitness functions and crossover and mutation operators in the genetic algorithm.  This paper also belongs to the sub-category of Reinforcement Learning because it discusses the use of reinforcement learning to improve tactical plans. The paper explains how reinforcement learning can be used to learn the optimal policy for a given scenario. The paper also discusses the use of Q-learning and SARSA algorithms in reinforcement learning.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper uses SAMUEL, a learning system based on genetic algorithms, to learn high-performance reactive strategies for navigation and collision avoidance.   Reinforcement Learning: The paper aims to develop robust reactive rules that perform well in a wide variety of situations, which is a key aspect of reinforcement learning. Additionally, the use of SAMUEL to learn these rules can be seen as a form of reinforcement learning, as the system is using feedback from its environment to improve its performance.
Theory.   Explanation: The paper introduces and investigates a mathematically rigorous theory of learning curves based on ideas from statistical mechanics. The focus is on developing bounds that are more reflective of the true behavior of learning curves, including properties such as phase transitions and power law asymptotics. The paper does not discuss the implementation or application of any specific AI techniques such as neural networks or reinforcement learning.
Neural Networks, Theory.  Explanation:  - Neural Networks: The paper discusses the use of a Neural Network Pushdown Automaton (NNPDA) model for learning context-free languages.  - Theory: The paper discusses the theoretical aspects of learning context-free languages, including the computational complexity of the task and the use of a priori knowledge for efficient learning.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper presents connectionist learning procedures for "sigmoid" and "noisy-OR" varieties of stochastic feedforward network, which are in the same class as the "belief networks" used in expert systems. These networks represent a probability distribution over a set of visible variables using hidden variables to express correlations. Conditional probability distributions can be exhibited by stochastic simulation for use in tasks such as classification.  Neural Networks: The paper presents learning procedures for stochastic feedforward networks, which are a type of neural network. The learning is done via a gradient-ascent method analogous to that used in Boltzmann machines, but due to the feedforward nature of the connections, the negative phase of Boltzmann machine learning is unnecessary. The paper also discusses the advantages of these networks over Boltzmann machines in pattern classification and decision making applications.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper discusses Genetic Programming (GP), which is a subfield of Genetic Algorithms. GP uses variable size representations as programs and evolves them through genetic operations such as mutation and crossover.   Reinforcement Learning: The paper analyzes the size and generality issues in programs evolved to control an agent in a dynamic and non-deterministic environment, as exemplified by the Pac-Man game. This is a classic problem in Reinforcement Learning, where an agent learns to take actions in an environment to maximize a reward signal.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the encoding of causal relationships in directed acyclic graphs, which is a common probabilistic method for representing causal relationships. The paper also mentions influence diagrams, which are a type of probabilistic graphical model.  Theory: The paper presents a definition of cause and effect in terms of decision-theoretic primitives, providing a theoretical foundation for causal reasoning. The paper also discusses the relationship between different representations of cause and effect, such as Pearl's representation and canonical form, and how they facilitate counterfactual reasoning.
Genetic Algorithms, Rule Learning.   Genetic Algorithms: The paper discusses the use of genetic programming (GP), which is a type of genetic algorithm, for learning rules to be used in fuzzy logic controllers (FLCs). The paper evaluates the potential of GP for this purpose and introduces structure-preserving genetic operators.  Rule Learning: The paper focuses on the problem of discovering a controller for mobile robot path tracking using fuzzy logic and evaluates the performance of incomplete rule-bases compared to a complete FLC designed by trial-and-error. The paper also introduces a constrained syntactic representation for the learned rules.
Probabilistic Methods.   Explanation: The paper discusses estimation in interval censoring models, which is a probabilistic method used in survival analysis. The paper describes nonparametric estimation of a distribution function and estimation of regression models using probabilistic methods such as maximum likelihood estimation and Fisher information.
Genetic Algorithms, Neural Networks, Probabilistic Methods.   Genetic Algorithms: The paper discusses genetic programming, which is a type of evolutionary algorithm that uses tree representations.   Neural Networks: The paper focuses on using genetic programming to evolve neural networks for solving a medical diagnosis problem and benchmark tasks.   Probabilistic Methods: The paper applies the Bayesian model-comparison framework to introduce a class of fitness functions with error and complexity terms, and presents an adaptive learning method that balances the model-complexity factor to evolve parsimonious programs.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper uses probabilistic methods to analyze the predictability of driven nonlinear acoustical systems. Specifically, the authors use state reconstruction techniques based on Bayesian probability theory to estimate the system's state and predict its future behavior.   Theory: The paper also falls under the category of Theory, as it presents a theoretical framework for analyzing the predictability of nonlinear systems. The authors develop a mathematical model for the system and use it to derive analytical expressions for the system's predictability. They also discuss the limitations of their approach and suggest directions for future research.
Probabilistic Methods.   Explanation: The paper is focused on dynamic probabilistic networks (DPNs) and presents a space-efficient algorithm for computing posterior distributions in these networks. The paper discusses the use of probabilistic methods for modeling complex stochastic processes and the inference task of monitoring in DPNs. The algorithm presented in the paper is also based on probabilistic methods.
Rule Learning, Theory.   The paper discusses the use of decision tree learners that prune rules based on either pessimistic or optimistic tests of their significance, which falls under the category of Rule Learning. The paper also presents a theoretical analysis of the continuum between naive pessimism and naive optimism in learning methods, which falls under the category of Theory.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper proposes a hierarchical recurrent neural network to address the difficulty of extracting long-term dependencies from sequential data. The experiments confirm the advantages of such structures.  Probabilistic Methods: The paper also proposes a similar approach for HMMs and IOHMMs, which are probabilistic models. The authors suggest using a more general type of a-priori knowledge, namely that the temporal dependencies are structured hierarchically, to avoid the problem of extracting long-term dependencies.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents a new algorithm for finding low complexity neural networks with high generalization capability. The experiments described in the paper involve feedforward and recurrent neural networks.   Probabilistic Methods: The paper uses a Bayesian argument based on a Gibbs algorithm variant and a novel way of splitting generalization error into underfitting and overfitting error. The argument suggests that flat minima correspond to "simple" networks and low expected overfitting. The paper also mentions that their approach does not require Gaussian assumptions and has a prior over input/output functions, thus taking into account net architecture and training set.
Neural Networks, Control, Theory.   Neural Networks: The paper's title explicitly mentions "Neural Networks" as one of the topics covered. The paper discusses various types of neural networks and their applications in control systems.  Control: The paper focuses on the use of neural networks in control systems. It discusses how neural networks can be used to model and control complex systems, and provides examples of their use in various applications.  Theory: The paper also discusses the theoretical foundations of neural networks and their applications in control systems. It covers topics such as backpropagation, gradient descent, and optimization algorithms, which are fundamental to the theory of neural networks.
Rule Learning.   Explanation: The paper describes a method for generating new reactive rules to add to an original set, based on explanations of execution traces. The focus is on improving the comprehensibility, accuracy, and generality of the reactive plans, which are sets of reactive rules. Therefore, the paper is primarily concerned with rule learning, which involves generating rules from data or knowledge. Other sub-categories of AI, such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, and Theory, are not directly relevant to the content of the paper.
Neural Networks.   Explanation: The paper presents a method for developing value-ordering strategies in constraint satisfaction search using an evolutionary technique called SANE, in which individual neurons evolve to form a neural network. The paper also describes how the neural network was evolved in a chronological backtrack search to decide the ordering of cars in a resource-limited assembly line. Therefore, the paper belongs to the sub-category of AI known as Neural Networks.
Case Based  Explanation: The paper discusses conversational case-based reasoning shells and the task of case engineering, which involves carefully authoring cases according to design guidelines to ensure good performance. The focus is on capturing knowledge as cases rather than rules, and incrementally extending the case library. The paper does not discuss genetic algorithms, neural networks, probabilistic methods, reinforcement learning, rule learning, or theory.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper presents a computational model of movement skill learning, which involves the use of neural networks to simulate the learning process.  Reinforcement Learning: The paper discusses the improvement of skills through practice, which is a key aspect of reinforcement learning. Additionally, the paper presents two speed-accuracy tradeoff experiments where the model's performance fits human behavior quite well, which is another characteristic of reinforcement learning.
Neural Networks, Rule Learning.   Neural Networks: The paper discusses the use of connectionist learning methods, which involve neural networks, to refine certainty-factor rule-bases. The authors explain how neural networks can be used to learn from data and improve the accuracy of rule-based systems.   Rule Learning: The paper also discusses the use of symbolic learning methods, which involve rule learning, to refine certainty-factor rule-bases. The authors explain how symbolic learning can be used to generate new rules and improve the interpretability of rule-based systems. The paper proposes a hybrid approach that combines both symbolic and connectionist learning methods to improve the accuracy and interpretability of rule-based systems.
Case Based, Rule Learning.   Case-based reasoning is explicitly mentioned in the abstract and throughout the paper as a component of the architecture. Rule-based reasoning is also a key component of the architecture, as the system uses a set of rules to obtain a preliminary answer for a given problem before drawing analogies from cases to handle exceptions to the rules. The paper does not mention any other sub-categories of AI.
Probabilistic Methods, Theory.   The paper presents a probabilistic method called "Stacked Density Estimation" for estimating the density of a given dataset. The authors provide a theoretical analysis of the method and its properties.
Reinforcement Learning, Rule Learning.   Reinforcement Learning is the main focus of the paper, as it compares two popular methods (Q-learning and classifier systems) for solving reinforcement learning problems. Rule Learning is also relevant, as the paper discusses the restrictions that need to be imposed on the classifier system in order to derive its equivalence with Q-learning.
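As background for the comparison, a minimal sketch of tabular Q-learning — the baseline side of the equivalence, not the paper's classifier-system formulation — on an assumed toy chain MDP (move left/right, reward 1 at the right end; all parameters are illustrative):

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy chain MDP; illustrative sketch only."""
    rnd = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[s][a]; a=0 left, a=1 right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rnd.random() < eps:
                a = rnd.randrange(2)
            else:
                a = max((0, 1), key=lambda act: q[s][act])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # one-step temporal-difference update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy moves right from every state, and the learned values decay geometrically (by gamma) with distance from the goal.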
Neural Networks.   Explanation: The paper introduces an artificial neural network that self-organizes based on Hebbian learning and negative feedback of activation. The focus of the paper is on the network's ability to form compact codings and identify filters sensitive to sparse distributed codes, which are both related to neural network architectures and learning algorithms. The other sub-categories of AI listed (Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, Theory) are not directly relevant to the content of the paper.
Theory.   Explanation: The paper presents a theoretical result about the limitations of certain classes of analytic functions in their ability to shatter sets of points. It does not involve any practical implementation or application of AI techniques such as neural networks, reinforcement learning, etc.
Case Based, Reinforcement Learning.   Case Based: The paper presents a methodology for the evaluation of case-based reasoning systems, illustrated with a case study of a multistrategy case-based and reinforcement learning system for autonomous robotic navigation.   Reinforcement Learning: The same case study combines case-based and reinforcement learning, and the methodology presented in the paper enables the selection of the best system configuration for a given domain and the prediction of how the system will behave in response to changing domain and problem characteristics.
Case Based, Theory  Explanation:  - Case Based: The paper discusses the importance of a powerful case adapter for a case-based reasoner to use its knowledge flexibly. It also describes a representation system, memory organization, and adaptation process tailored to this requirement.  - Theory: The paper addresses the task of adapting abstract knowledge about planning to fit specific planning situations. It discusses the need to reconcile incommensurate representations of planning situations and proposes a solution.
Probabilistic Methods.   Explanation: The paper discusses the maximum likelihood estimator (MLE) for the proportional hazards model with current status data, which is a probabilistic method commonly used in survival analysis. The paper also considers the estimation of the asymptotic variance matrix for the MLE of the regression parameter, which involves probabilistic calculations. There is no mention or application of any of the other sub-categories of AI listed.
Case Based, Rule Learning  Explanation:   This paper belongs to the sub-category of Case Based AI because it discusses a computational model of advice taking using stories, which involves using past cases (stories) to inform decision-making in new situations. The paper also belongs to the sub-category of Rule Learning because it proposes an efficient solution to the problem of showing that the recommendations and appropriateness conditions of a story obtain in a new situation, which involves learning rules from past cases. Specifically, the proposal involves caching the results of determining the story's recommendations and appropriateness conditions, which can be seen as a form of rule learning.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper describes the use of a genetic algorithm to evolve a team of agents that can play a game. The algorithm uses a fitness function to evaluate the performance of each team and selects the best individuals to reproduce and create the next generation. This process is repeated until a satisfactory team is evolved.  Reinforcement Learning: The paper also discusses the use of reinforcement learning to train the agents in the team. The agents learn from their experiences in the game and adjust their behavior accordingly to maximize their rewards. The paper describes how the reinforcement learning algorithm is integrated with the genetic algorithm to create a more effective team.
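The select-reproduce-mutate loop described above can be sketched in a few lines. This is a generic generational GA on the standard OneMax toy problem (maximise the number of 1-bits), assumed here purely for illustration — it is not the paper's team-evolution setup, and all parameter values are arbitrary:

```python
import random

def genetic_algorithm(bits=20, pop_size=30, generations=60,
                      p_cross=0.9, p_mut=0.02, seed=0):
    """Minimal generational GA on OneMax; illustrative sketch only."""
    rnd = random.Random(seed)
    fitness = sum  # fitness of a bit string = number of 1-bits
    pop = [[rnd.randrange(2) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # tournament selection: keep the fitter of two random individuals
            p1 = max(rnd.sample(pop, 2), key=fitness)
            p2 = max(rnd.sample(pop, 2), key=fitness)
            # one-point crossover
            if rnd.random() < p_cross:
                cut = rnd.randrange(1, bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # bit-flip mutation
            child = [b ^ 1 if rnd.random() < p_mut else b for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)
```

The fitness function, crossover operator, and mutation rate are exactly the three knobs the classifier-system and team-evolution papers above tune for their own domains.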
Probabilistic Methods.   Explanation: The paper proposes a Bayesian noninformative approach for the estimation of normal mixtures, which is a probabilistic method. The paper also discusses the performance of MCMC algorithms, which are commonly used in probabilistic methods for inference.
Rule Learning, Theory.   Rule Learning is the most related sub-category of AI in this paper. The server provides interfaces to systems for inductive rule learning.   Theory is also related as the paper discusses the implementation of a WWW server in Common LISP to facilitate exploratory programming in the global hypermedia domain and to provide access to complex research programs, particularly artificial intelligence systems. The paper also discusses the generalization of automatic form-processing techniques developed for email servers to operate seamlessly over the Web.
Probabilistic Methods.   Explanation: The paper discusses the use of Bayesian model averaging, which is a probabilistic method, to account for model uncertainty in survival analysis.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper proposes a method for model averaging and selection that involves estimating optimal weighting factors for combining estimates from different bootstrap samples. This involves probabilistic reasoning and statistical inference.   Theory: The paper proposes a new method for model averaging and selection that is based on theoretical considerations about the information contained in training points that are left out of individual bootstrap samples. The paper also discusses the advantages of this method over Bayesian approaches, which involves theoretical arguments.
Theory.   Explanation: The paper focuses on the theoretical understanding of the success of adaptive reweighting and combining algorithms (arcing) such as Adaboost in reducing generalization error. It formulates prediction as a game and shows that existing arcing algorithms are algorithms for finding good game strategies. It also proves a bound on the generalization error for the combined predictors in terms of their maximum error that is sharper than bounds to date. The paper does not discuss any specific implementation or application of AI techniques such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Case Based, Reinforcement Learning  The paper belongs to the sub-category of Case Based AI because it discusses the use of analogies in problem-solving, which involves retrieving and adapting solutions from past cases. The authors also mention the importance of indexing and organizing cases for efficient retrieval.  The paper also touches on Reinforcement Learning, as it discusses the role of imitation in problem-solving. The authors argue that imitation can be a form of reinforcement learning, where the learner observes and imitates successful problem-solving strategies. They also mention the potential for reinforcement learning algorithms to incorporate imitation as a learning mechanism.
This paper belongs to the sub-category of AI known as Case Based. This is evident from the title and abstract, which both refer to "Conversational case-based reasoning" as the focus of the paper. The paper describes a system named NaCoDAE, which is a form of case-based reasoning where users input a partial problem description and the system responds with a ranked solution display based on stored cases. The paper also discusses the use of implication rules to support dialogue inferencing, which is a key aspect of case-based reasoning.
Rule Learning, Theory.   The paper discusses the problem of learning conjunctions of Horn clauses, which are a type of logical rule. The authors propose a method for learning these rules based on a decision tree algorithm, which falls under the category of rule learning. The paper also presents a theoretical analysis of the algorithm's performance and complexity, which falls under the category of theory.
Theory.   Explanation: The paper deals with the theoretical problem of identifying an unknown read-once formula using specific kinds of queries. It does not involve any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning. Rule learning is also not applicable as the paper does not involve learning rules from data. Therefore, the paper belongs to the sub-category of AI theory.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of a simple recurrent network for training the sequential RAAM.   Probabilistic Methods: The paper mentions the use of distributed patterns, which are a form of probabilistic representation.
Probabilistic Methods.   Explanation: The paper proposes and analyzes a distribution learning algorithm for Probabilistic Finite Suffix Automata, which is a subclass of probabilistic finite automata. The learning algorithm is motivated by real applications in man-machine interaction such as handwriting and speech recognition. The paper also discusses the theoretical properties of the algorithm, including its ability to efficiently learn distributions generated by the restricted sources. Therefore, the paper primarily belongs to the sub-category of Probabilistic Methods in AI.
Rule Learning, Theory.   The paper presents the clausal discovery engine claudien, which is a representative of the inductive logic programming paradigm and discovers regularities in data by means of first order clausal theories. This falls under the category of Rule Learning. The paper also discusses the declarative specification of the language bias, which determines the set of syntactically well-formed regularities, which is a theoretical aspect of the technique. Therefore, the paper also falls under the category of Theory.
Rule Learning, Theory.   The paper deals with the problem of estimating the quality of attributes in the context of machine learning from examples. It proposes to use RELIEFF, an extended version of the RELIEF algorithm, as an estimator of attributes for top-down induction of decision trees. This approach falls under the sub-category of Rule Learning.   The paper also discusses the limitations of current inductive machine learning algorithms and proposes a new approach that shows a strong relation between RELIEF's estimates and impurity functions, which are usually used for heuristic guidance of inductive learning algorithms. This aspect of the paper falls under the sub-category of Theory.
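For context, a sketch of the basic two-class Relief estimator that ReliefF extends (Kira and Rendell's original scheme, not the paper's extended variant; the nearest-neighbour search and sample count here are simplifying assumptions):

```python
import random

def relief(data, labels, n_samples=100, seed=0):
    """Basic two-class Relief attribute estimator; illustrative sketch only."""
    rnd = random.Random(seed)
    n_attr = len(data[0])
    w = [0.0] * n_attr

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    for _ in range(n_samples):
        i = rnd.randrange(len(data))
        x, y = data[i], labels[i]
        # nearest hit (same class) and nearest miss (opposite class)
        hit = min((j for j in range(len(data)) if j != i and labels[j] == y),
                  key=lambda j: dist(data[j], x))
        miss = min((j for j in range(len(data)) if labels[j] != y),
                   key=lambda j: dist(data[j], x))
        for a in range(n_attr):
            # attributes that separate the classes gain weight;
            # attributes that separate same-class neighbours lose weight
            w[a] += abs(x[a] - data[miss][a]) - abs(x[a] - data[hit][a])
    return [v / n_samples for v in w]
```

ReliefF generalises this by averaging over k nearest hits and misses and handling multi-class and noisy data, which is what makes it usable as the non-myopic heuristic discussed in these papers.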
Genetic Algorithms.   Explanation: The paper discusses the application of Genetic Programming (a type of Genetic Algorithm) to chaotic time series prediction. It explores the dynamics of Genetic Programming and the importance of finding an optimal representation for the problem domain. The paper also proposes a modification to the crossover operator to improve search performance. While other sub-categories of AI may be relevant to the topic of time series prediction, the focus of this paper is on the use of Genetic Programming.
Rule Learning, Theory.   Explanation:  - Rule Learning: The paper proposes a new heuristic based on RELIEF for guiding ILP algorithms in the search for good conjunctions of literals. This is a form of rule learning, where the goal is to learn logical rules that capture relationships between variables in the data.  - Theory: The paper presents a new approach to ILP that introduces a declarative bias to keep the growth of the training set within linear bounds. This bias is a theoretical constraint on the search space of the ILP algorithm. The paper also discusses the advantages and deficiencies of the proposed approach, which is a theoretical analysis of the method.
Rule Learning, Theory.   The paper discusses the use of a non-myopic heuristic measure (ReliefF) for discretization of continuous attributes, which is a rule learning technique. The paper also compares this approach with other methods and evaluates their performance using several learning algorithms on real-world databases, which involves theoretical analysis and experimentation.

Reinforcement Learning.   Explanation: The paper presents a variation on the TD(λ) algorithm, which is a type of reinforcement learning algorithm. The authors use this algorithm to learn an evaluation function for a chess program, which is a common application of reinforcement learning in game playing. The paper also discusses the relationship between their results and previous work on reinforcement learning in backgammon.
Probabilistic Methods.   Explanation: The paper deals with the Metropolis-Hastings algorithm, which is a probabilistic method for sampling from a target distribution. The paper also discusses the use of a sequential estimator of the density of the target distribution, which is another probabilistic method. The focus of the paper is on the asymptotic properties of the algorithm, which is a theoretical aspect of probabilistic methods.
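A generic random-walk Metropolis-Hastings sampler — the algorithm the paper analyses, though not its adaptive sequential density estimator — can be sketched as follows; the standard-normal target, step size, and chain length are assumptions for illustration:

```python
import math
import random

def metropolis_hastings(log_target, x0=0.0, n=20000, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings sampler; illustrative sketch only."""
    rnd = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        # symmetric Gaussian proposal, so the Hastings ratio reduces to
        # the ratio of target densities
        y = x + rnd.gauss(0.0, step)
        if math.log(rnd.random()) < log_target(y) - log_target(x):
            x = y  # accept the proposal; otherwise keep the current state
        samples.append(x)
    return samples

# example target: standard normal, known only up to a normalising constant
log_std_normal = lambda x: -0.5 * x * x
```

Because only the log-density ratio is needed, the target may be unnormalised, which is what makes the method useful for posterior sampling.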
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents a model for the development of viewpoint invariant responses to faces from visual experience in a biological system using an attractor network model.   Probabilistic Methods: The paper compares the performance of two different representations for face recognition - independent component analysis (ICA) and principal component analysis (PCA) - and evaluates their effectiveness in recognizing faces across changes in pose. The paper also incorporates a lowpass temporal filter on unit activities in the attractor network model, which is a probabilistic method for smoothing out noisy input signals.
Probabilistic Methods.   Explanation: The paper discusses the use of Bayesian regression methods based on Dirichlet mixture models for curve fitting and regression smoothing. These methods involve probabilistic modeling of the data and estimation of uncertainties about the fitted regression functions. The paper also discusses the use of Markov chain simulation for computation, which is a common technique in Bayesian inference.
Genetic Algorithms, Theory.   Genetic Algorithms is the primary sub-category as the paper focuses on the analysis of the roles of population size and crossover in genetic algorithms. The paper presents theoretical and empirical results on the disruptive effect of different forms of crossover in genetic algorithms.   Theory is also a relevant sub-category as the paper summarizes recent theoretical results on the disruptive effect of multi-point crossover and uniform crossover. The paper also discusses the implications of the results on implementation issues and performance, and suggests several directions for further research.
Probabilistic Methods.   Explanation: The paper discusses the use of Mixture Density Networks (MDN), which are a class of neural networks with a rigorous probabilistic interpretation, for discriminant analysis in educational research. The focus is on the probabilistic aspects of the MDN approach and how it compares to traditional linear discriminant analysis. While neural networks are mentioned, the paper primarily deals with the probabilistic interpretation and application of MDNs.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper compares a simulated annealing algorithm (SASAT) with GSAT, a greedy algorithm, for solving satisfiability problems. Simulated annealing is a probabilistic method that uses a probability distribution to guide the search for a solution.   Theory: The paper presents an ablation study that helps to explain the relative advantage of SASAT over GSAT. The study involves systematically removing components of the SASAT algorithm to determine their contribution to its performance. This is a theoretical analysis of the algorithm.
The paper belongs to the sub-category of AI called Case Based.   Explanation: The paper discusses the use of Support Management Automated Reasoning Technology (SMART) for COMPAQ Customer Service, which is a case-based reasoning system. The paper does not mention any other sub-categories of AI.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper evaluates the discretization methods with respect to Naive-Bayesian classifiers, which are probabilistic models.  Rule Learning: The paper presents a discretization method based on the C4.5 decision tree algorithm, which is a rule learning algorithm. The paper also compares this method to an existing entropy-based discretization algorithm and a recently proposed error-based technique.
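The core step of entropy-based discretization can be illustrated directly: choose the cut point on a numeric attribute that minimizes the class entropy of the induced binary split. This is a minimal sketch in the spirit of Fayyad and Irani's method (which applies this step recursively), not the paper's exact algorithm:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a class-label list."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_cut(values, labels):
    """Return the boundary that minimizes weighted class entropy
    of a binary split of a numeric attribute."""
    pairs = sorted(zip(values, labels))
    best = (float("inf"), None)
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no boundary between equal attribute values
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        w = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if w < best[0]:
            best = (w, cut)
    return best[1]
```

For instance, values `[1, 2, 3, 10, 11, 12]` with labels `['a', 'a', 'a', 'b', 'b', 'b']` yield the cut 6.5, which separates the classes perfectly (zero entropy on both sides).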
Theory. This paper presents a theoretical construction for a multiple-output system and proves its minimality and observability properties. There is no explicit use or discussion of any specific sub-category of AI such as neural networks or reinforcement learning.
Theory. The paper presents a theoretical analysis of the stability of linear systems with bounded controls, using mathematical tools such as Lyapunov functions and matrix inequalities. There is no mention or application of any specific sub-category of AI such as neural networks or reinforcement learning.
Probabilistic Methods.   Explanation: The paper discusses the use of exploratory statistical methods and robust estimators for detecting gross errors in data reconciliation. These methods are based on probabilistic models and are designed to be insensitive to departures from ideal statistical distributions. The paper does not mention any other sub-categories of AI such as neural networks, genetic algorithms, or reinforcement learning.
Probabilistic Methods, Neural Networks, Theory.   Probabilistic Methods: The paper explores how a given hierarchy over base classes can help one learn accurate multi-category classifiers, comparing a hard top-down approach to learning category-discriminants with a soft approach that shares training data among sibling categories; both approaches rest on probabilistic classification.  Neural Networks: The class hierarchy serves as prior knowledge that can help one learn a more accurate classifier, and the hard and soft discriminant-learning approaches can be realized with neural network classifiers.  Theory: The paper investigates the potential benefits of using a hierarchy for domains such as classifying documents into subject categories under the Library of Congress scheme or classifying world-wide-web documents into topic hierarchies, and discusses why the improvement in prediction accuracy associated with using a hierarchy can be subtle and dependent on the expressiveness of a hypothesis class. These discussions involve theoretical considerations.
Rule Learning, Theory.   The paper discusses a method for pruning decision trees, which is a type of rule learning algorithm. The paper also presents a theoretical analysis of the performance of the proposed method.
Theory.   Explanation: The paper presents theoretical results and algorithms for PAC-learning geometric concepts in a constant-dimensional space that are robust against malicious misclassification noise. The paper does not involve any implementation or application of AI techniques such as neural networks, genetic algorithms, or reinforcement learning.
Theory  Explanation: The paper is focused on developing a new criterion for decision tree pruning based on theoretical concepts such as uniform convergence and the Vapnik-Chervonenkis dimension. The authors also note that their method is theoretically sound and well motivated. While the paper does discuss the method's performance in practice, the main focus is the theoretical basis for the approach. Therefore, the sub-category of AI that this paper belongs to is Theory.
Neural Networks.   Explanation: The paper discusses the identifiability of weights in continuous-time feedback neural networks, indicating that the paper belongs to the sub-category of AI that deals with neural networks. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Probabilistic Methods, Theory  The paper belongs to the sub-category of Probabilistic Methods because it utilizes probabilistic techniques to adaptively integrate functions with dominant peaks. The authors propose a subregion-adaptive integration algorithm that uses a probabilistic model to estimate the location and height of the dominant peak in each subregion. This probabilistic model is based on the assumption that the function values in each subregion follow a Gaussian distribution.  The paper also belongs to the sub-category of Theory because it presents a theoretical analysis of the proposed algorithm. The authors derive the convergence rate of the algorithm and prove that it achieves the optimal convergence rate for functions with a dominant peak. They also provide a complexity analysis of the algorithm and show that it has a polynomial time complexity.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms: The paper discusses the use of evolutionary algorithms, which are a type of genetic algorithm, to solve combinatorial problems. The authors explain how the algorithm works: it creates a population of potential solutions and uses selection, crossover, and mutation to evolve the population toward better solutions.   Probabilistic Methods: The paper also discusses the use of probabilistic methods, such as Monte Carlo simulation, to evaluate the fitness of potential solutions. The authors explain how these methods can be used to estimate the probability of a solution being optimal, which can guide the search toward better solutions.
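The evolutionary loop just described — a population improved by selection, crossover, and mutation — can be sketched as a minimal bit-string GA. The tournament selection, one-point crossover, and parameter choices here are illustrative assumptions, not the authors' algorithm:

```python
import random

def run_ga(fitness, n_bits, pop_size=30, generations=60, p_mut=0.02, seed=1):
    """Minimal generational GA: tournament selection, one-point
    crossover, and per-bit flip mutation on bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Binary tournament: pick two individuals, keep the fitter one.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randint(1, n_bits - 1)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Flip each bit independently with probability p_mut.
            child = [b ^ (rng.random() < p_mut) for b in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```

On the classic OneMax problem (fitness = number of ones), `run_ga(sum, 20)` drives the population toward the all-ones string.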
Probabilistic Methods.   Explanation: The paper explicitly mentions the use of probabilistic networks for protein sequence analysis and secondary structure prediction. The authors also highlight the advantages of this approach, such as the ability to perform detailed experiments with different models and the efficiency of both training and prediction. The paper also emphasizes the precise quantitative semantics of the predictions generated by their probabilistic method, which is not shared by other classification methods. Therefore, this paper belongs to the sub-category of AI known as Probabilistic Methods.
Theory.   Explanation: The paper focuses on theoretical analysis of the leave-one-out cross-validation estimate of the generalization error and the bounds on its error. It introduces a new notion of error stability and applies it to various classes of learning algorithms, including training error minimization procedures and Bayesian algorithms. The paper does not discuss the implementation or application of any specific AI sub-category such as neural networks or reinforcement learning.
Case Based, Rule Learning  Explanation:   - Case Based: The paper describes a program that uses observations of previous states of the world as guides for conducting experiments. These observations are used to support or weaken the case for a generalisation of a concept. This is a characteristic of case-based reasoning, where past experiences are used to solve new problems.  - Rule Learning: The program uses a partial matching algorithm to find substitutions that enable two states to be unified. The generalisation of the two states is their unifier. This process involves learning rules that can be applied to new situations.
Genetic Algorithms.   Explanation: The paper presents a system that mirrors the conceptual makeup of a GP system, which is a type of Genetic Algorithm. The paper discusses the use of Genetic Programming techniques as a domain-independent problem solving tool, which is a key characteristic of Genetic Algorithms. The title of the paper also includes "Genetic Programming System," further indicating its focus on this sub-category of AI.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms: The paper investigates the effectiveness of different recombination operators in Evolution Strategies, which is a type of Genetic Algorithm. The authors compare the performance of different multi-parent recombination operators, such as the weighted average and the intermediate recombination, on a set of benchmark problems.   Probabilistic Methods: Evolution Strategies are a type of probabilistic optimization method that uses a population of candidate solutions and probabilistic operators to generate new solutions. The paper focuses on the use of multi-parent recombination operators, which are probabilistic operators that combine the genetic material of multiple parents to generate new offspring. The authors use statistical methods to analyze the performance of different recombination operators and draw conclusions about their effectiveness.
Genetic Algorithms, Neural Networks, Reinforcement Learning.   Genetic Algorithms: The paper discusses the co-evolution of different adaptive behaviors in competing species of predators and prey through the use of simulated mobile robots with infrared proximity sensors. The robots' neurocontrollers have the same architecture and genetic length, but different types of variability during life are compared. This is an example of using genetic algorithms to evolve and optimize behaviors.  Neural Networks: The paper mentions that the robots have neurocontrollers, which are likely implemented as artificial neural networks. The different types of variability during life are applied to the weights and biases of the neural networks, which can affect their behavior.  Reinforcement Learning: The paper discusses how the predators and prey adapt their behaviors through co-evolution in response to each other's actions. The prey exploits noisy controllers to generate random trajectories, while the predator benefits from directional-change controllers that improve pursuit behavior. This is an example of reinforcement learning, where the robots learn from the feedback they receive from their environment (i.e., the other robot).
Neural Networks.   Explanation: The paper explicitly mentions that the nonlinear systems being studied are relevant to neural networks research. The concept of observability is also commonly used in the analysis and design of neural networks. None of the other sub-categories of AI are mentioned or implied in the text.
Neural Networks. This paper belongs to the sub-category of Neural Networks as it discusses the calculation of second derivatives in connectionist networks, which are a type of neural network. The paper reviews and develops algorithms for calculating second derivatives in feed-forward networks with arbitrary activation functions and error functions.
Case Based, Theory.   Case Based: The paper discusses the use of analogical reasoning in functional program synthesis, which involves finding solutions to new problems by adapting solutions from similar problems encountered in the past. This is a key characteristic of case-based reasoning.   Theory: The paper presents a theoretical framework for applying analogical reasoning to functional program synthesis, including the use of a graph metric and the Structure Mapping Engine. The authors also discuss the implications of their experimental results for the broader field of AI.
This paper belongs to the sub-category of AI called Case Based.   Explanation: The paper discusses the use of examples to learn and make decisions, which is a key characteristic of Case Based reasoning. The authors compare two approaches to learning from examples: reminding and heuristic switching. They argue that reminding is a more effective approach, as it involves retrieving similar cases from memory and adapting them to the current situation. This process is similar to how Case Based reasoning works, where past cases are used to solve new problems. Therefore, the paper is most closely related to the Case Based sub-category of AI.
Case Based, Rule Learning  Explanation:   - Case Based: The paper discusses generalization styles using prototypes, which can be seen as a form of case-based reasoning. The system learns from specific examples (prototypes) and generalizes to new cases based on their similarity to the prototypes. - Rule Learning: The paper discusses how the prototype styles of generalization can be used to provide accurate generalization for a wide variety of applications. This involves learning rules or patterns from the training data that can be applied to new cases.
Neural Networks.   Explanation: The paper focuses on the system-theoretic aspects of continuous-time recurrent neural networks with sigmoidal activation functions. It discusses their universal approximation properties, controllability, observability, parameter identifiability, and minimality. It also mentions facts regarding the computational power of recurrent nets. These topics are all related to the study of neural networks.
Neural Networks.   Explanation: The paper specifically deals with the controllability of continuous-time recurrent neural networks, which are a type of neural network. The paper does not discuss any other sub-category of AI.
Neural Networks.   Explanation: The paper discusses Artificial Neural Networks (ANNs) and specifically focuses on a type of learning model called Adaptive Self Organizing Concurrent Systems (ASOCS), which has a dynamic topology. The paper introduces Location-Independent Transformations (LITs) as a strategy for implementing learning models with dynamic topologies efficiently in parallel hardware. The Location-Independent ASOCS (LIA) model is presented as a specific LIT for ASOCS Adaptive Algorithm 2. Therefore, the paper primarily belongs to the sub-category of Neural Networks in AI.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as it discusses the use of the Bellman equation in controlling Markov decision processes.   Theory is also relevant, as the paper presents conditions and algorithms to ensure a single, optimal solution to the Bellman equation.
Case Based  Explanation: The paper discusses the issue of retrieving appropriate cases from memory in a case-based system and proposes a method for learning structural indices to design cases. The paper also mentions the use of similarity-based learning for index generalization, which is a common technique in case-based reasoning.
Case Based, Rule Learning  Explanation:  This paper belongs to the sub-category of Case Based AI because it discusses a model-based approach to analogical reasoning and learning in design, which involves using past cases as a basis for solving new design problems. The authors describe how their approach involves representing design cases as structured models, and using these models to reason about similarities and differences between cases.   This paper also belongs to the sub-category of Rule Learning AI because it discusses how the authors' approach involves learning rules from past design cases, which can then be used to guide the solution of new design problems. The authors describe how their approach involves using a rule induction algorithm to learn rules from a set of design cases, and how these rules can be used to generate new design solutions.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents OXBOW, an unsupervised learning system that constructs classes of observed movements. This system is based on neural networks, as it uses a representational format with a temporal structure to relate components of a single complex movement.  Probabilistic Methods: The paper mentions that OXBOW is an unsupervised learning system, which means that it uses probabilistic methods to learn from the data. The system builds abstract movement concepts with appropriate component structure, allowing it to predict the latter portions of a partially observed movement. This prediction is based on the probability of the observed movement belonging to a certain class.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper discusses the use of an AQ-type learning algorithm to search for the best hypothesis in a given representation space.   Probabilistic Methods are also present in the text as the paper discusses the use of data-driven constructive induction (DCI) to search for a better representation space by analyzing input examples (data). DCI uses two classes of representation space improvement operators: constructors and destructors, which are probabilistic in nature.
Neural Networks, Theory.   Neural Networks: The paper discusses using the Support Vector Algorithm to train three different types of handwritten digit classifiers, which are all examples of neural networks.   Theory: The paper presents a theoretical finding that small subsets of a database contain all the information necessary to solve a given classification task, and that the theory allows us to predict the classifier with the best generalization ability based on characteristics of the learning machines.
Probabilistic Methods, Neural Networks, Theory.   Probabilistic methods are discussed in the paper as one of the two classes of reconstruction methods, and Bayesian methods are specifically mentioned as being especially accurate when a continuity constraint is enforced.   Neural networks are mentioned as a possible implementation for the reconstruction methods discussed in the paper, and the paper suggests that the brain could feasibly use such a neural network architecture to solve related problems.   Theory is also a relevant sub-category, as the paper discusses the theoretical values of the minimal achievable reconstruction errors and how they quantify how accurately a physical variable is encoded in the neuronal population. The paper also discusses how reconstruction is useful in providing insight into how the brain might use distributed representations in solving related computational problems.
Neural Networks, Theory.   Neural Networks: The paper presents a model of the dynamics of the head-direction cell ensemble, which is a type of neural network found in the limbic system of rats. The model explains the stability of the network's activity profile and its ability to shift dynamically, and it is based on synaptic weight distribution components with even and odd symmetry.   Theory: The paper presents a theoretical framework for understanding how the head-direction cell ensemble represents spatial orientation. The model is based on attractor dynamics and integrates self-motion information to derive a world-centered representation from observer-centered sensory inputs. The paper also discusses the modality-independence of the internal representation and the correction for cumulative error by putative local-view detectors.
Theory.   Explanation: The paper presents a theoretical framework for analyzing learning algorithms by decomposing the expected misclassification rate into bias and variance components. It does not focus on any specific AI sub-category such as neural networks or reinforcement learning, but rather provides a general tool for understanding and evaluating supervised classification learning algorithms.
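While the paper's decomposition targets the misclassification rate under its own definitions of bias and variance, the flavor of such an analysis can be illustrated with the squared-loss case, where E[(estimate - truth)^2] = bias^2 + variance holds exactly. This Monte Carlo sketch uses an assumed estimator and sampling setup for illustration only:

```python
import random
import statistics

def bias_variance(estimator, true_value, sample_gen, n_trials=2000, seed=0):
    """Monte Carlo estimate of bias^2, variance, and MSE of an
    estimator under repeated sampling (squared-loss decomposition)."""
    rng = random.Random(seed)
    estimates = [estimator(sample_gen(rng)) for _ in range(n_trials)]
    mean_est = statistics.fmean(estimates)
    bias_sq = (mean_est - true_value) ** 2
    variance = statistics.fmean((e - mean_est) ** 2 for e in estimates)
    mse = statistics.fmean((e - true_value) ** 2 for e in estimates)
    return bias_sq, variance, mse
```

For the sample mean of ten draws from a Gaussian with mean 1 and variance 1, the bias is near zero and the variance is near 1/10, and the two components sum exactly to the mean squared error.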
Genetic Algorithms.   Explanation: The paper describes the use of genetic programming to evolve sorting networks, which involves the use of genetic algorithms to generate and optimize solutions. The Xilinx XC6216 field-programmable gate array is used as a platform for implementing and testing the evolved networks. While other sub-categories of AI may also be relevant to this work (such as neural networks for classification tasks), the focus of the paper is on the use of genetic programming and evolutionary algorithms.
Theory.   Explanation: The paper proposes a theoretically justifiable algorithm for obtaining a parsimonious solution to a corrupted linear system. The focus is on the development of a linear-programming-based algorithm that minimizes the number of nonzero elements in x and the error ||Ax - b||_1. The paper does not involve any application of case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Neural Networks, Theory.   Neural Networks: The paper presents simulations of a feed-forward and a recurrent neural network to demonstrate how viewpoint invariant representations of faces can be developed from visual experience. The simulations explore the interaction of temporal smoothing of activity signals with Hebbian learning.  Theory: The paper presents a theoretical framework for how viewpoint invariant representations of faces can be developed from visual experience by capturing the temporal relationships among the input patterns. The simulations are based on the theoretical principles of temporal association and Hebbian learning.
Neural Networks.   Explanation: The paper discusses the use of neural networks for data mining tasks, specifically focusing on approaches for producing comprehensible models and reducing training times. The paper does not mention any other sub-categories of AI.
Neural Networks, Theory.   Neural Networks: The paper proposes a statistical theory for overtraining in realizable stochastic neural networks trained with Kullback-Leibler loss. The analysis focuses on the asymptotic case and considers early stopping and cross-validation stopping.  Theory: The paper presents analytical findings on the asymptotic gain in generalization error with early stopping and cross-validation stopping. It also answers the question of how to divide examples into training and testing sets to obtain optimum performance. The large scale simulations done on a CM5 are in agreement with the analytical findings.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses clustering, which is a probabilistic method used for unsupervised learning of patterns and clusters in a given database. The k-Median Algorithm is also mentioned, which is a probabilistic algorithm used for clustering.  Theory: The paper proposes mathematical programming formulations for feature selection, clustering, and robust representation problems. These formulations are theoretically justifiable and computationally implementable in a finite number of steps. The paper also discusses the generalization ability of leaner models, which is a theoretical concept in machine learning.
Genetic Algorithms.   Explanation: The paper is solely focused on providing an overview of Genetic Algorithms, their fundamentals, and their applications. The text discusses the basic concepts of Genetic Algorithms, such as selection, crossover, and mutation, and how they are used to solve optimization problems. The paper also covers various applications of Genetic Algorithms, such as in engineering, finance, and medicine. Therefore, the paper belongs to the sub-category of AI known as Genetic Algorithms.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper proposes a stochastic search method based on simulated annealing, which is a probabilistic method for solving combinatorial optimization problems.   Rule Learning: The paper describes the implementation of the stochastic search method in a rule learning system called ATRIS. The method uses appropriate operators for structuring the search space and heuristic pruning to handle imperfect data.
Neural Networks, Theory.   Neural Networks: The paper discusses the behavior of a single neuron with the logistic function as the transfer function, which is a fundamental building block of neural networks.   Theory: The paper presents a theoretical result about the number of local minima of the error function based on the square loss, which is relevant for understanding the behavior of optimization algorithms used in neural networks.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses evolutionary algorithms (EAs) which are a type of genetic algorithm. It specifically focuses on the effects of design choices regarding local selection algorithms in parallel, spatially distributed populations.  Theory: The paper applies formal analysis techniques to study the effects of neighborhood size and shape on local selection algorithms. It aims to provide a clearer understanding of these effects, which can inform the design of future EAs.
Probabilistic Methods.   Explanation: The paper discusses techniques for probabilistic reasoning in Bayesian networks, specifically focusing on resolving tradeoffs between competing qualitative influences. The two approaches presented involve combining qualitative and numeric probabilistic reasoning to infer qualitative relationships between nodes in the network. The paper does not discuss case-based reasoning, genetic algorithms, neural networks, reinforcement learning, rule learning, or theory.
Genetic Algorithms.   Explanation: The paper explicitly discusses parallel genetic algorithms and surveys various approaches and implementations. The other sub-categories of AI are not mentioned or discussed in the paper.
Probabilistic Methods.   Explanation: The paper discusses a modification of the standard Gaussian distribution, which is a probabilistic method. The rectified Gaussian distribution is also used to model pattern manifolds, which is a probabilistic approach to modeling data.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper introduces a neural network learning rule that is transformed into a fixed-point iteration algorithm for Independent Component Analysis. The algorithm is simple and fast to converge to the most accurate solution allowed by the data.   Probabilistic Methods: The algorithm finds all non-Gaussian independent components, regardless of their probability distributions. The convergence of the algorithm is rigorously proven, and the convergence speed is shown to be cubic.
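The fixed-point iteration for extracting one independent component can be sketched as follows. This is a standard one-unit update with g = tanh on centered, whitened data — it follows the general FastICA scheme rather than reproducing the paper's derivation:

```python
import numpy as np

def fastica_one_unit(X, n_iter=200, seed=0):
    """One-unit fixed-point ICA iteration with nonlinearity g = tanh.
    X: array of shape (n_features, n_samples), assumed centered and whitened."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wx = w @ X
        # Fixed-point update: w <- E[x g(w.x)] - E[g'(w.x)] w, then normalize.
        w_new = (X * np.tanh(wx)).mean(axis=1) - (1 - np.tanh(wx) ** 2).mean() * w
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1) < 1e-10  # fixed up to sign
        w = w_new
        if converged:
            break
    return w
```

Run on a whitened mixture of two non-Gaussian sources (e.g. one uniform, one sparse), the recovered projection `w @ X` correlates strongly with one of the original sources.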
Neural Networks, Theory.   Neural Networks: The paper discusses a variant of the BCM learning rule, which is a type of Hebbian learning commonly used in neural networks. The paper also presents a practical neuronal framework for detecting suspicious events.  Theory: The paper reiterates Barlow's seminal work on minimal entropy codes and unsupervised learning, and presents mathematical results suggesting optimal minimal entropy coding.
Genetic Algorithms.   Explanation: The paper explicitly discusses genetic algorithms as the focus of the research. The title of the paper also includes "Genetic Algorithms." While other sub-categories of AI may be mentioned or used in conjunction with genetic algorithms, the primary focus and contribution of the paper is on extending selection mechanisms in genetic algorithms.
Genetic Algorithms.   Explanation: The paper is solely focused on explaining the concept and implementation of Genetic Algorithms, which is a sub-category of AI. The author provides a detailed tutorial on how to use Genetic Algorithms to solve optimization problems. The paper does not discuss any other sub-category of AI.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper discusses the use of an AQ-type learning algorithm to search for the best hypothesis in a given representation space.   Probabilistic Methods are also present in the text as the paper discusses the use of data-driven constructive induction (DCI) to search for a better representation space by analyzing input examples (data). DCI uses two classes of representation space improvement operators: constructors and destructors, which are probabilistic in nature.
Neural Networks, Theory.  Neural Networks: The paper discusses the Nonlinear PCA Learning Rule, which is a type of neural network. The authors explain how this learning rule can be used for signal separation, which is a common application of neural networks.  Theory: The paper provides a mathematical analysis of the Nonlinear PCA Learning Rule and its application to signal separation. The authors derive equations and provide proofs to support their claims. This demonstrates a focus on theory rather than practical implementation.
Theory.   Explanation: The paper presents an analysis of the Relief algorithm and its extension ReliefF, leading to an adaptation for regression problems. The focus is on the theoretical aspects of attribute estimation and the performance of the algorithm in different conditions, rather than on the application of a specific AI sub-category. While the paper mentions classification problems, it does not delve into the use of probabilistic methods, rule learning, or other sub-categories.
Rule Learning, Theory.   The paper discusses the emerging research area of Inductive Logic Programming, which is a sub-category of AI that focuses on learning rules from data. The paper discusses the need for sound principles from both logic and statistics, which are key components of rule learning. The paper also discusses the unifying framework for Inverse Resolution and Relative Least General Generalisation, which are both rule learning algorithms. Therefore, the paper is most related to Rule Learning.   The paper also discusses the background and goals of Inductive Logic Programming, which falls under the category of Theory in AI. The paper discusses the limitations of its parent subjects, Logic Programming and Machine Learning, and the need for a new approach that overcomes these limitations. Therefore, the paper is also related to Theory in AI.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper focuses on Bayesian inference, which is a probabilistic approach to modeling. The authors discuss the principles of Bayesian inference and several approximate implementations.  Neural Networks: The paper specifically applies Bayesian methods to feed-forward neural network models. The authors discuss the advantages of Bayesian methods over traditional frequentist model training and selection.
Probabilistic Methods.   Explanation: The paper presents an algorithm for learning Bayesian belief networks from databases, which is a probabilistic method for modeling uncertain relationships between variables. The algorithm is based on the computation of mutual information of attribute pairs, which is a common approach in probabilistic methods for feature selection and network structure learning. The paper also discusses the properties and guarantees of the algorithm in terms of its ability to generate a belief network close to the underlying model and its complexity in terms of conditional independence tests.
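The mutual-information computation at the heart of such structure-learning algorithms can be sketched directly from empirical counts. This is a generic plug-in estimator for discrete attribute pairs, not the paper's exact procedure:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two discrete
    attributes: I(X;Y) = sum_{x,y} p(x,y) * log2(p(x,y) / (p(x) p(y)))."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))  # joint counts
    px = Counter(xs)            # marginal counts for X
    py = Counter(ys)            # marginal counts for Y
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )
```

Two perfectly correlated binary attributes give 1 bit of mutual information, while independent attributes give 0; structure-learning algorithms of this family use such scores to decide which attribute pairs to connect.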
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the use of adaptive heuristics for solving the Job Shop Scheduling Problem, which involves probabilistic methods such as simulated annealing and tabu search. The statistical analysis of the search spaces also reveals the impacts of inherent properties of the problem on adaptive heuristics.  Theory: The paper presents a computational study for the Job Shop Scheduling Problem, with emphasis on the structure of the solution space as it appears for adaptive search. The statistical analysis of the search spaces also reveals the impacts of inherent properties of the problem on adaptive heuristics, which contributes to the theoretical understanding of the problem and its solution methods.
Probabilistic Methods.   Explanation: The paper presents an algorithm for constructing Bayesian belief networks, which is a probabilistic graphical model. The algorithm is based on the computation of mutual information of attribute pairs, which is a common approach in probabilistic methods for constructing Bayesian networks. The paper also discusses the correctness proof and analysis of computational complexity, which are important aspects of probabilistic methods.
Neural Networks.   Explanation: The paper discusses the implementation and testing of the Support Vector Machine (SVM) for regression, a kernel-based learning machine closely related to neural network methods. The paper also compares the performance of SVM with other approximation techniques, including polynomial and rational approximation, local polynomial techniques, Radial Basis Functions, and Neural Networks. Therefore, the paper belongs to the sub-category of AI known as Neural Networks.
Neural Networks.   Explanation: The paper specifically describes an implementation of a neural network using multi-chip modules as the interconnect medium. The paper also discusses the requirements for dense interconnect in neural network systems and how MCM technology fulfills this requirement. While other sub-categories of AI may be mentioned or used in conjunction with neural networks, the focus of this paper is on the neural network implementation using MCMs.
Rule Learning.   Explanation: The paper discusses the process of specializing recursive predicates by modifying the rules (clauses) of a logical program. The algorithm presented in the paper is based on rule learning techniques, where the rules are modified in order to exclude negative examples while preserving positive examples. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Theory.
Rule Learning.   Explanation: The paper discusses a method for specializing logic programs by pruning SLD-trees based on positive and negative examples. This falls under the category of rule learning, which involves learning rules or logical expressions from data. The paper does not discuss case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or theory.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses how cognitive maps can be viewed in the context of more recent formalisms for qualitative decision modeling, which can facilitate the development of more powerful inference procedures. This suggests the use of probabilistic methods for inference in cognitive maps.   Theory: The paper discusses the theoretical foundations of cognitive mapping as a qualitative decision modeling technique developed by political scientists over twenty years ago. It also discusses how recent formalisms for qualitative decision modeling provide a firm semantic foundation for cognitive maps.
Case Based.   Explanation: The paper discusses a new method for continuous case-based reasoning and its application to an autonomous navigation system. The article also concludes with a general discussion of case-based reasoning issues addressed by this research. There is no mention of genetic algorithms, neural networks, probabilistic methods, reinforcement learning, rule learning, or theory in the text.
Rule Learning, Theory.   Rule Learning is the most related sub-category as the paper discusses the East-West Challenge, which was a competition to discover the simplest classification rules for train-like structured objects. The paper analyzes the results obtained by different learning programs, including the AQ family of learning programs, which are rule-based.   Theory is also relevant as the paper presents ideas for further research, including the development of a measure of knowledge complexity that would adequately capture the cognitive complexity of knowledge. The authors briefly discuss a preliminary measure of such cognitive complexity, called C-complexity, which is a theoretical concept.
Probabilistic Methods.   Explanation: The paper presents an algorithm for constructing Bayesian network structures from data, which is a probabilistic method in AI. The paper discusses the use of conditional independence tests and Bayesian networks, which are both probabilistic methods.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper proposes a new criterion for model selection that involves adjusting the training error by the average covariance of the predictions and responses. This criterion can be applied to general prediction problems and rules, including regression and classification. The use of covariance suggests a probabilistic approach to model selection.  Theory: The paper presents a new criterion for model selection and relates it to other model selection procedures. It also provides a measure of the effective number of parameters used by an adaptive procedure. These aspects suggest a theoretical approach to model selection.
Neural Networks.   Explanation: The paper introduces the magnetic neural gas (MNG) algorithm, which is a type of neural network. The algorithm extends unsupervised competitive learning with class information to improve the positioning of radial basis functions. The paper discusses the performance of MNG on various data sets, demonstrating its promise as a neural network algorithm.
Rule Learning, Theory.   The paper describes a production system architecture, which is a type of rule-based system that uses a set of rules to guide problem-solving behavior. The authors also discuss the theoretical underpinnings of their model, including the use of analogical reasoning and the role of working memory in problem solving. While other sub-categories of AI may be relevant to this research (such as neural networks for modeling cognitive processes), the focus of the paper is primarily on rule-based approaches and theoretical frameworks.
Probabilistic Methods.   Explanation: The paper discusses the use of probabilistic methods, specifically Laplace's method, to integrate out incidental parameters associated with measurement errors and obtain approximate confidence contours for model parameters. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper proposes a novel architecture and set of learning rules for cortical self-organization based on the idea that multiple information channels can modulate one another's plasticity. The model is implemented in a biologically feasible, hierarchical neural circuit.  Probabilistic Methods: The model uses a maximum likelihood cost function, which allows the scheme to be implemented in a biologically feasible, hierarchical neural circuit. The simulations demonstrate the utility of temporal context in modulating plasticity: by taking advantage of the temporal continuity in image sequences, the model learns a representation that categorizes people's faces according to identity, independent of viewpoint. The model also learns a two-tiered representation, starting with a coarse view-based clustering and proceeding to a finer clustering of more specific stimulus features.
Ensemble Learning, Theory.   Ensemble Learning is the main sub-category of AI that this paper belongs to, as it discusses the AdaBoost algorithm and its performance when using a subset of the hypotheses. This is a type of ensemble learning approach, where multiple individual hypotheses are combined to form a composite hypothesis.   Theory is also a relevant sub-category, as the paper provides insights into the behavior of AdaBoost and how it can be optimized by selecting a subset of the hypotheses. The paper presents experimental results to support these insights, which can be used to improve the performance of AdaBoost in practice.
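A minimal AdaBoost sketch in this spirit, showing how individual weak hypotheses are reweighted and combined into a composite hypothesis; the decision stumps and toy data are illustrative, not from the paper:

```python
import math

def adaboost(X, y, weak_hyps, rounds):
    """AdaBoost: reweight examples, pick the weak hypothesis with the
    lowest weighted error each round, and combine by weighted vote."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []  # (alpha, hypothesis) pairs
    for _ in range(rounds):
        # weighted error of each candidate hypothesis
        errs = [sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
                for h in weak_hyps]
        best = min(range(len(weak_hyps)), key=errs.__getitem__)
        eps = max(errs[best], 1e-12)
        if eps >= 0.5:
            break
        alpha = 0.5 * math.log((1 - eps) / eps)
        h = weak_hyps[best]
        ensemble.append((alpha, h))
        # increase weight on mistakes, decrease on correct predictions
        w = [wi * math.exp(-alpha * yi * h(xi))
             for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    s = sum(alpha * h(x) for alpha, h in ensemble)
    return 1 if s >= 0 else -1

# Toy 1-D data (labels in {-1, +1}) and threshold stumps.
X = [0, 1, 2, 3, 4, 5]
y = [-1, -1, -1, 1, 1, 1]
stumps = [lambda x, t=t: 1 if x > t else -1 for t in range(6)]
ens = adaboost(X, y, stumps, rounds=3)
print([predict(ens, x) for x in X])  # matches y
```

Selecting a subset of the hypotheses, as the paper discusses, amounts to keeping only some of the (alpha, h) pairs in the ensemble.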
Neural Networks, Theory.  Neural Networks: The paper discusses the role of synaptic plasticity in visual cortical ocular dominance, which involves changes in the strength of connections between neurons. This is a key concept in neural network models of learning and memory.  Theory: The paper presents a theoretical framework for understanding how afferent excitatory and lateral inhibitory synaptic plasticity contribute to ocular dominance. It proposes a mathematical model that can be used to simulate the effects of these processes on neural activity in the visual cortex.
Neural Networks, Theory.   Neural Networks: The paper presents a novel account of the effects of pharmacological treatments on cortical plasticity based on the EXIN synaptic plasticity rules, which enhance the efficiency, discrimination, and context-sensitivity of a neural network's representation of perceptual patterns.   Theory: The paper proposes a new model of plasticity in lateral inhibitory pathways and makes predictions based on this model. It also discusses previous models and their limitations, and presents a theoretical framework for understanding the effects of pharmacological treatments on cortical plasticity.
Theory. The paper presents algorithms and a new complexity measure for the exact learnability of concepts represented by unions of boxes in d-dimensional Euclidean space using membership and equivalence queries. The focus is on theoretical analysis of the learnability and complexity of the problem, rather than practical implementation or application of specific AI techniques.
Genetic Algorithms, Neural Networks, Probabilistic Methods.   Genetic algorithms are the focus of the paper, as they are used for pruning a multilayer perceptron. Neural networks are the subject of that pruning: the multilayer perceptron whose parameters are removed. Probabilistic methods are also present, as simulated annealing is used as a stochastic optimization technique.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the use of belief propagation and revision algorithms, which are probabilistic methods for inference in graphical models.  Theory: The paper presents theoretical results on the convergence and optimality of belief propagation and revision algorithms in networks with loops.
Genetic Algorithms, Genetic Programming.   The paper describes the use of a Genetic Algorithm with an order or permutation chromosome to find optimal schedules for a power network maintenance problem. It then goes on to use Genetic Programming to evolve the best known schedule, starting from hand-coded heuristics used with the GA. Therefore, both Genetic Algorithms and Genetic Programming are present in the text.
Genetic Algorithms.   Explanation: The paper specifically discusses genetic programming, which is a class of evolutionary algorithms based on genetic algorithms. The methodology presented in the paper uses constraints to limit the search space in genetic programming, which is a common technique in genetic algorithms. Therefore, this paper belongs to the sub-category of AI known as Genetic Algorithms.
Theory.   Explanation: The paper presents necessary and sufficient conditions for observability of a specific class of linear systems, without using any AI techniques such as neural networks, genetic algorithms, or reinforcement learning. The paper is focused on theoretical analysis of the observability of these systems, and does not involve any practical implementation or application of AI methods. Therefore, the paper belongs to the sub-category of AI theory.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the use of probability theory in reasoning from data, specifically Bayesian probability. The authors explain how Bayesian probability can be used to update beliefs based on new data and how it can be used to make predictions.   Rule Learning: The paper also discusses the use of rules in reasoning from data. The authors explain how rules can be learned from data using techniques such as decision trees and association rule mining. They also discuss the limitations of rule-based approaches and the need for more flexible methods.
Theory.   Explanation: The paper presents a system, Forte, which refines first-order Horn-clause theories using a variety of different revision techniques. The focus is on improving an existing knowledge base using learning methods, which falls under the category of theory refinement. The paper does not mention any of the other sub-categories of AI listed in the question.
This paper belongs to the sub-category of AI known as Neural Networks. Neural networks are present in the text as the paper discusses the use of artificial neural networks for time series analysis. The paper describes how neural networks can be used to model and predict time series data, and how they can be trained using backpropagation and other techniques. The paper also discusses the advantages and limitations of using neural networks for time series analysis.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper proposes a feature generation method (FGEN) that creates Boolean features based on heuristically selected collections of subsequences. These features can be seen as rules that check for the presence or absence of certain patterns in the sequence data.   Probabilistic Methods are also present in the text as the paper evaluates the performance of FGEN in combination with two commonly used learning systems (C4.5 and Ripper), whose induction heuristics rest on probabilistic estimates computed from the data. The accuracy of these systems is improved when the new features generated by FGEN are added to the existing representations of sequence data.
Theory.   Explanation: The paper focuses on the theoretical problem of PAC learning intersections of halfspaces with membership queries. It does not involve any practical implementation or application of AI techniques such as neural networks, genetic algorithms, or reinforcement learning. The paper presents a theoretical framework for analyzing the learnability of this problem and proposes an algorithm based on the theory of Fourier analysis. Therefore, the paper belongs to the sub-category of AI theory.
Genetic Algorithms.   Explanation: The paper focuses on the application of genetic algorithms to the Assembly Line Balancing Problem, and extensively discusses the use of genetic algorithms for combinatorial optimization. While other sub-categories of AI may also be relevant to this problem, such as probabilistic methods or reinforcement learning, the paper primarily focuses on the use of genetic algorithms.
Case Based.   Explanation: The paper presents the application of Case-Based Reasoning methods to the KOSIMO data base of international conflicts. A Case-Based Reasoning tool - VIE-CBR has been developed and used for the classification of various outcome variables, like political, military, and territorial outcome, solution modalities, and conflict intensity. In addition, the case retrieval algorithms are presented as an interactive, user-modifiable tool for intelligently searching the conflict data base for precedent cases.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper proposes a new approach for handling higher order uncertainty, including the Bayesian approach, which is a probabilistic method.  Theory: The paper proposes a new theoretical framework for handling higher order uncertainty, which is based on the concept of confidence as higher order uncertainty. The paper also discusses the limitations of existing approaches and the need for a new theoretical framework.
Case Based, Theory.   Case-based reasoning is the main focus of the paper, and the authors explore different algorithms and approaches to case-based learning. The paper also discusses the role of inductive bias in learning, which is a fundamental concept in machine learning theory.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses the changes and developments in the field of evolutionary computation, which includes genetic algorithms as a subfield. The paper also mentions the injection of new ideas challenging old tenets, which is a characteristic of genetic algorithms as they evolve and adapt to new problems.  Theory: The paper attempts to summarize the emergent properties of the field of evolutionary computation, which involves discussing common themes and important open issues. This is a theoretical approach to understanding the state of the field.
Case Based, Probabilistic Methods.   Case Based: The paper discusses memory-based reasoning (MBR) algorithms, which are a type of case-based reasoning. MBR uses specific cases to perform classification, as opposed to summarizing the data probabilistically like Bayesian methods.   Probabilistic Methods: The paper also discusses Bayesian classifiers, which are a type of probabilistic method. The comparison between MBR and Bayesian classifiers highlights the differences in their probabilistic assumptions about the data. Additionally, the paper mentions time-series data generated by Markov models, which are a type of probabilistic model.
Case Based, Rule Learning  Explanation:   This paper belongs to the sub-category of Case Based AI because it describes the K-nearest-neighbor decision rule, which assigns an object of unknown class to the plurality class among the K labeled "training" objects that are closest to it. This is a classic example of case-based reasoning, where new cases are classified based on their similarity to previously observed cases.  Additionally, this paper also belongs to the sub-category of Rule Learning because it describes new types of K-nearest-neighbor procedures that estimate the local relevance of each input variable, or their linear combinations, for each individual point to be classified. This information is then used to separately customize the metric used to define distance from that object in finding its nearest neighbors. This is a form of rule learning, where rules are learned to customize the metric used in the classification process.
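The plurality-vote decision rule described in this entry can be sketched as follows; the toy data are illustrative, and the per-point metric customization the paper proposes is not shown, only the basic global-metric rule:

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """K-nearest-neighbor decision rule: assign the query to the
    plurality class among its k closest training points (squared
    Euclidean distance)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    neighbors = sorted(train, key=lambda pt: dist(pt[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy 2-D data: two well-separated clusters.
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(knn_classify(train, (0.5, 0.5)))  # a
print(knn_classify(train, (5.5, 5.5)))  # b
```

The procedures in the paper would replace the fixed `dist` with a metric whose input weights are estimated locally around each query point.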
Genetic Algorithms.   Explanation: The paper compares different types of hybrid genetic algorithms for solving a seismic data interpretation problem. The traditional hybrid genetic algorithm and the staged hybrid genetic algorithms all use genetic search methods, which fall under the category of genetic algorithms in AI. The paper does not mention any other sub-categories of AI.
Genetic Algorithms.   Explanation: The paper describes an immune system model that uses a genetic algorithm as a central component. The simulation experiments also involve the use of genetic algorithms to explore pattern recognition in the immune system. While the paper does touch on other AI sub-categories such as pattern recognition and learning, the focus is on the use of genetic algorithms.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents a method for accurate representation of high-dimensional unknown functions, which is achieved by recursively splitting the input space in smaller subspaces, while in each of these subspaces a linear approximation is computed. This is a common approach in neural networks, where the input space is divided into smaller regions and a function is learned for each region.   Probabilistic Methods: The paper mentions that the representations of the function at all levels (i.e., depths in the tree) are retained during the learning process, such that a good generalisation is available as well as more accurate representations in some subareas. This suggests that the method uses probabilistic methods to estimate the function in different subspaces, which is a common approach in probabilistic models.
Probabilistic Methods.   Explanation: The paper discusses the equivalence between hidden Markov models and linear Boltzmann chains, both of which are probabilistic models commonly used in time series analysis. The authors specifically mention the use of symbol emission energies and state-state transition energies, which are key components of probabilistic models.
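For concreteness, the standard HMM sequence-likelihood computation (the forward algorithm) is sketched below; this is the conventional probabilistic parameterization, not the paper's energy-based Boltzmann-chain formulation, and the two-state model is illustrative:

```python
def forward_likelihood(pi, A, B, obs):
    """Forward algorithm: P(observation sequence) under an HMM with
    initial distribution pi, transition matrix A, emission matrix B."""
    n = len(pi)
    # alpha[s] = P(obs so far, current state = s)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
                 for t in range(n)]
    return sum(alpha)

# Two hidden states, two observable symbols.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
print(forward_likelihood(pi, A, B, [0, 1, 0]))
```

In the Boltzmann-chain view discussed in the paper, the logarithms of these transition and emission probabilities play the role of (negated) state-state transition and symbol emission energies.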
Genetic Algorithms, Reinforcement Learning, Rule Learning.   Genetic Algorithms: The paper presents a coevolutionary approach to learning sequential decision rules, which involves the use of genetic algorithms to evolve sub-behaviors independently.   Reinforcement Learning: The paper discusses the use of reinforcement learning to train the robot in a simulated domain.   Rule Learning: The paper focuses on learning sequential decision rules, which involves the acquisition of rules for decision-making. The coevolutionary approach encourages the formation of stable niches representing simpler sub-behaviors, which can be seen as a form of rule learning.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper presents a new algorithm for the learning of appropriate biases based on previous learning experience. The algorithm is developed for a simple, linear learning system - the LMS or delta rule with a separate learning-rate parameter for each input. The algorithm adjusts the learning-rate parameters, which are an important form of bias for this system.  Reinforcement Learning: The appropriate bias is viewed as the key to efficient learning and generalization. The IDBD algorithm adapts bias based on previous learning experience, making it suitable for drifting or non-stationary learning tasks. The paper shows that the IDBD algorithm performs better than ordinary LMS and finds the optimal learning rates for particular tasks of this type. The IDBD algorithm is also presented as an incremental form of hold-one-out cross-validation.
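A sketch of the base learner this entry describes, LMS (the delta rule) with a separate step-size parameter per input; note that IDBD itself additionally adapts each step size from experience, which is omitted here, and the target function is illustrative:

```python
import random

def lms_step(w, x, y, alphas):
    """One LMS (delta-rule) update with a separate step size per input.
    IDBD would also adjust each alpha based on experience; here fixed."""
    y_hat = sum(wi * xi for wi, xi in zip(w, x))
    err = y - y_hat
    return [wi + a * err * xi for wi, xi, a in zip(w, x, alphas)]

# Learn y = 2*x1 - 1*x2 from noise-free examples.
random.seed(0)
w = [0.0, 0.0]
alphas = [0.1, 0.1]
for _ in range(500):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = 2 * x[0] - 1 * x[1]
    w = lms_step(w, x, y, alphas)
print([round(wi, 2) for wi in w])  # approaches [2.0, -1.0]
```

Larger alphas on the relevant inputs speed learning, which is why adapting them per input, as IDBD does, constitutes a useful bias.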
Neural Networks.   Explanation: The paper focuses on a pruning method for neural networks, specifically on the level of individual network parameters. The paper does not mention any other sub-categories of AI such as case-based reasoning, genetic algorithms, probabilistic methods, reinforcement learning, or rule learning.
Neural Networks - This paper belongs to the sub-category of Neural Networks as it describes the development of a connectionist net simulator called ICSIM, which is object-oriented and designed to simulate and model neural networks. The paper discusses the use of off-the-shelf library classes and customized implementations to create structured and homogeneous neural networks. The paper also mentions the use of a user interface to graphically present the modeled neural networks.
Case Based, Theory.   The paper belongs to the sub-category of Case Based AI because it discusses experience-based reasoning and case-adaptation tasks. It also describes a model-based method for solving non-routine case-adaptation tasks.   The paper also belongs to the sub-category of Theory because it introduces and discusses the concept of generic teleological mechanisms (GTMs) and their use in case adaptation. It also evaluates the computational feasibility and sufficiency of the method proposed.
Rule Learning, Case Based.   Rule Learning is present in the text as the paper compares the performance of control-rule learning systems with case-based reasoning systems. Case Based is also present in the text as the paper specifically analyzes the utility problem in case-based reasoning systems and compares it with control-rule learning systems.
Case Based, Probabilistic Methods  The paper belongs to the sub-category of Case Based AI because it presents a model of similarity-based retrieval that attempts to capture how people judge similarity and analogy when given items to compare. The model uses a pool of memory items and a matcher to filter candidates, which is similar to how a case-based reasoning system works.  The paper also belongs to the sub-category of Probabilistic Methods because the model uses content vectors to estimate how well structured representations will match, which is a probabilistic approach. Additionally, the paper mentions that the model is capable of modeling patterns of access found in psychological data, which suggests that it uses probabilistic methods to make predictions.
Theory.   Explanation: The paper presents an algorithm for on-line learning of linear functions and analyzes its worst-case loss bounds and robustness with respect to noise in the data. It does not involve any specific implementation or application of case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Case Based.   Explanation: The paper discusses a fundamental issue in case-based reasoning, which is similarity assessment. It proposes an approach called constructive similarity assessment, which uses prior cases as a guide to dynamically carve augmented descriptions of new cases out of memory. The paper is focused on improving the process of case-based reasoning, which falls under the sub-category of AI known as Case Based.
Case Based, Reinforcement Learning  Explanation:  - Case Based: The paper discusses a model of memory search strategy learning applied to the problem of retrieving relevant information for adapting cases in case-based reasoning. - Reinforcement Learning: The paper discusses the general requirements for appropriate strategy learning and presents a model of memory search strategy learning, which involves learning through feedback and reinforcement.
This paper belongs to the sub-category of Genetic Algorithms.   Explanation: The paper discusses the Recombination Operator, which is a key component of Genetic Algorithms. The paper explores the correlation between the Recombination Operator and the fitness landscape, which is a fundamental concept in Genetic Algorithms. The paper also discusses the search performance of Genetic Algorithms, which is a key metric for evaluating their effectiveness. Therefore, the paper is primarily focused on Genetic Algorithms and their application in optimization problems.
Probabilistic Methods.   Explanation: The paper discusses a method for approximating probability distributions, which falls under the category of probabilistic methods in AI. The authors specifically mention that this method is useful for analyzing complex systems such as neural networks, which could also be considered a sub-category of AI. However, the focus of the paper is on the probabilistic method itself rather than its application to neural networks, so probabilistic methods is the most relevant sub-category.
Rule Learning.   Explanation: The paper presents an ASOCS model for massively parallel processing of incrementally defined rule systems in areas such as adaptive logic, robotics, logical inference, and dynamic control. The focus is on adaptive algorithm 3 (AA3) and its architecture and learning algorithm. The ASOCS learning algorithms incorporate new rules in a distributed fashion in a short, bounded time. Therefore, the paper is primarily concerned with rule learning.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms: The paper describes a novel search algorithm that borrows ideas from genetic algorithms. The algorithm moves from a coarse-grained search to a fine-grained search of the function space by changing its mutation rate, which is a key feature of genetic algorithms.   Probabilistic Methods: The algorithm uses a diversity-based distance metric to ensure that it searches new regions of the space, which is a probabilistic method.
Genetic Algorithms.   Explanation: The paper explicitly mentions the use of a genetic algorithm for searching for combinations of faults that produce noteworthy performance by the vehicle controller. This falls under the sub-category of AI known as Genetic Algorithms. No other sub-categories of AI are mentioned in the text.
Theory. The paper proposes a new theoretical model of speedup learning and uses it to motivate the notion of "batch problem solving." Theoretical results are then empirically validated in the domain of Eight Puzzle. There is no mention of any of the other sub-categories of AI listed.
Probabilistic Methods.   Explanation: The paper discusses non-parametric density estimation, which is a probabilistic method used to approximate the values of a probability density function. The paper also mentions the use of kernel functions, which are commonly used in probabilistic methods.
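A minimal Gaussian kernel density estimator in this spirit; the samples and bandwidth are illustrative, not taken from the paper:

```python
from math import exp, pi, sqrt

def kde(samples, x, h=0.5):
    """Gaussian kernel density estimate of the pdf at x, bandwidth h:
    average of kernels centered on the samples."""
    k = lambda u: exp(-0.5 * u * u) / sqrt(2 * pi)
    return sum(k((x - s) / h) for s in samples) / (len(samples) * h)

# Two clusters of samples: density peaks near each cluster and
# dips in between.
samples = [-1.2, -0.9, -1.0, 0.8, 1.1, 1.0]
print(kde(samples, -1.0), kde(samples, 0.0), kde(samples, 1.0))
```

The bandwidth `h` controls the usual smoothness trade-off: small `h` tracks the samples closely, large `h` smooths the estimate toward a single bump.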
Theory  Explanation: This paper presents a theoretical argument about the nature of evolution and how it relates to pre-adaptation. It does not involve any specific AI techniques or applications.
Rule Learning.   Explanation: The paper discusses the problem of learning first-order Horn programs from entailment, which is a subfield of rule learning in artificial intelligence. The paper proposes a method for learning a specific subclass of first-order acyclic Horn programs with constant arity, which involves using equivalence and entailment membership queries and a polynomial-time subsumption procedure. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Theory.
Genetic Algorithms, Neural Networks.   Genetic Algorithms (GAs) and Neural Networks (NNs) are both discussed in the abstract and throughout the paper as paradigms for solving NP-Complete problems. The paper discusses how each can be used to heuristically solve boolean satisfiability (SAT) problems, and how any other NP-Complete problem can be transformed into an equivalent SAT problem and then solved via GAs or NNs. Therefore, the paper belongs to the sub-categories of Genetic Algorithms and Neural Networks.
Probabilistic Methods, Reinforcement Learning, Theory.   Probabilistic Methods: The paper discusses the use of Bayesian learning perspective in game theory to the problem of equilibrium selection.   Reinforcement Learning: The paper investigates approaches to learning coordinated strategies in stochastic domains where an agent's actions are not directly observable by others.   Theory: The paper discusses the special problems that arise when actions are not observable, including effects on rates of convergence, and the effect of action failure probabilities and asymmetries. The paper also proposes the use of maximum likelihood as a means of removing strategies from consideration, with the aim of convergence to a conventional equilibrium, at which point learning and deliberation can cease.
Case Based, Rule Learning  Explanation:   This paper belongs to the sub-category of Case Based AI because it discusses how humans transfer their expertise from a familiar domain to a new domain by using generic mechanisms acquired from problem-solving experiences. The paper also mentions recent work in case-based design and how generic mechanisms are one type of abstraction used by designers.  The paper also belongs to the sub-category of Rule Learning because it discusses how generic mechanisms are acquired incrementally from problem-solving experiences in familiar domains by generalization over patterns of regularity. The paper also addresses the issues in generalization from experiences, such as what to generalize from an experience, how far to generalize, and what methods to use.
Genetic Algorithms.   Explanation: The paper discusses the optimization of a single bit string using a (1+1)-Genetic Algorithm, and presents optimal mutation rate schedules for different fitness functions. The entire paper is focused on the use and optimization of genetic algorithms for search and optimization problems.
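A sketch of the (1+1)-GA on a bit string, here with a fixed mutation rate and the OneMax fitness function for illustration; the paper's contribution is the optimal *schedule* of rates over time, which this simplification does not implement:

```python
import random

def one_plus_one_ga(fitness, n, rate, steps, seed=0):
    """(1+1)-GA on a bit string: flip each bit independently with
    probability `rate`; keep the offspring only if it is at least
    as fit as the parent (elitism)."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        child = [b ^ (rng.random() < rate) for b in parent]
        if fitness(child) >= fitness(parent):
            parent = child
    return parent

# OneMax: fitness is the number of ones; a mutation rate of 1/n is
# the classic choice for this function.
n = 20
best = one_plus_one_ga(sum, n, rate=1.0 / n, steps=2000)
print(sum(best))  # typically reaches n after enough steps
```

Schedules of the kind the paper derives would replace the constant `rate` with a value that changes as the search progresses.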
Genetic Algorithms.   Explanation: The paper specifically focuses on using genetic algorithms to learn navigation and collision avoidance behaviors for robots. The learning is performed under simulation, and the resulting behaviors are then used to control the actual robot. The paper also briefly explains the learning algorithm used, which is based on genetic algorithms. While other sub-categories of AI may be involved in the overall approach, the focus of the paper is on the use of genetic algorithms for learning robot behaviors.
Probabilistic Methods.   Explanation: The paper describes an ITS architecture that explicitly models uncertainty using Bayesian graphical modeling, which is a probabilistic method. The authors mention recent progress in the management of uncertainty in knowledge-based systems, which also points to the use of probabilistic methods.
Neural Networks.   Explanation: The paper compares a neural network algorithm (NNSAT) with a traditional algorithm (GSAT) for solving satisfiability problems. The focus is on the performance of the neural network algorithm, which suggests that it scales better as the number of variables increases. There is no mention of any other sub-category of AI in the text.
Neural Networks.   Explanation: The paper explicitly discusses the use of Artificial Neural Networks in an Artificial Life perspective, and compares them to "classical" neural networks. While other sub-categories of AI may be indirectly related to the topic, Neural Networks is the most directly relevant.
Neural Networks.   Explanation: The paper describes a family of self-organizing neural architectures called VIEWNET that are used for 3-D object learning and recognition from multiple 2-D views. The architecture incorporates a preprocessor and a supervised incremental learning system, both of which are common components of neural networks. The paper also compares the properties of the nodes in VIEWNET with those of cells in monkey inferotemporal cortex, which is a common approach in neural network research.
Neural Networks.   Explanation: The paper discusses various neural network architectures for modelling time-dependent signals, and presents new algorithms for training multilayer perceptrons with FIR filter synapses. The focus is on comparing and benchmarking different neural network algorithms for this specific problem.
Neural Networks, Theory.   Neural Networks: The paper presents a methodology to estimate the optimal number of learning samples and hidden units in function approximation using a feedforward network. The paper analyzes the representation error and generalization error, which are components of the total approximation error, and investigates the approximation accuracy of a feedforward network as a function of the number of hidden units and learning samples.   Theory: The paper introduces an asymptotic model of the error function (AMEF), based on the asymptotic behavior of the approximation error. The paper also analyzes an alternative model of the error function that includes theoretical results about general bounds of approximation. The paper uses these models in combination with knowledge about the computational complexity of the learning rule to find an optimal learning set size and number of hidden units resulting in a minimum computation time for a given desired precision of the approximation.
Probabilistic Methods.   Explanation: The paper proposes a methodology for Bayesian model determination in decomposable graphical Gaussian models, which involves using a hyper inverse Wishart prior distribution on the concentration matrix for each given graph. The authors also implement a reversible jump MCMC sampler for model determination, which is a probabilistic method. The paper discusses the use of prior distributions and characterizes the set of moves which preserve the decomposability of the graph, both of which are common in probabilistic methods.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper proposes a computational model for working memory that involves probabilistic inference, as the authors attribute limited inferential power to relevant suspended goals.   Theory: The paper presents a theoretical framework for understanding opportunity recognition and opportunistic behavior in design. It proposes a model for working memory and compares it with other relevant theories of opportunistic planning.
Neural Networks.   Explanation: The paper investigates the convergence properties of the backpropagation algorithm, which is a commonly used algorithm for training neural networks. The paper specifically looks at how the size and complexity of neural networks affect their ability to generalize to new data.
Neural Networks, Theory.   Neural Networks: The paper primarily focuses on the training of neural networks, specifically the backpropagation algorithm, and how it relates to overfitting and generalization. The authors analyze the behavior of trained networks and compare them to other models, such as polynomial models.   Theory: The paper also delves into relevant theory, outlining the reasons for practical differences in neural network training. The authors discuss the importance of excess degrees of freedom and the bias towards smoother solutions in MLPs. They also suggest future work in creating more parsimonious solutions and improving training algorithms.
Rule Learning, Theory.   Explanation:  - Rule Learning: The paper presents a novel induction algorithm, Rulearner, which induces classification rules using a Galois lattice as an explicit map through the search space of rules. The Rulearner system is also capable of learning both decision lists as well as unordered rule sets.  - Theory: The paper discusses the construction of lattices from data and examines the use of these structures in inducing classification rules. The Rulearner system is shown to compare favorably with commonly used symbolic learning methods which use heuristics rather than an explicit map to guide their search through the rule space. The paper also shows that the Rulearner system is robust in the presence of noisy data.
Case-Based Reasoning. This paper belongs to the sub-category of Case-Based Reasoning in AI. The paper deals with the retrieval of useful cases in case-based reasoning and presents a new search algorithm called Fish and Shrink. The paper focuses on case retrieval and case representation, which are key components of Case-Based Reasoning.
Genetic Algorithms.   Explanation: The paper describes the Parallel Genetic Algorithm (PGA) and its modifications compared to the traditional genetic algorithm. It discusses the use of selection, crossover, and mutation operators to evolve a population of individuals towards a better fitness value. The paper also analyzes the correlation of the fitness landscape for the traveling salesman problem, which is a common problem in genetic algorithms. Therefore, the paper belongs to the sub-category of Genetic Algorithms in AI.
Case Based, Theory  Explanation:  - Case Based: The paper is specifically about case-based learning, which is a subfield of AI that involves solving new problems by adapting solutions from similar past problems (i.e. cases). The paper reviews recent literature on case-based learning and discusses alternative performance tasks and case representations. - Theory: The paper discusses topics in need of additional research, which implies a focus on theoretical aspects of case-based learning. Additionally, the paper highlights the importance of using more expressive case representations, which can involve developing new theoretical frameworks for representing and reasoning with cases.
Case Based, Memory-Based Learning  Explanation: The paper proposes a performance-oriented approach to Natural Language Processing based on automatic memory-based learning of linguistic tasks. This approach is a form of case-based reasoning, where the system learns from examples stored in memory. The term "memory-based" is used throughout the paper to describe this approach. Therefore, the paper belongs to the sub-category of AI known as Case Based.
Probabilistic Methods.   Explanation: The paper proposes a sequential algorithm to optimize a function using stepwise estimators, which involves probabilistic methods such as simulated annealing. The convergence of the algorithm is also proven under mild conditions, which is a common approach in probabilistic methods.
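Since the entry above names simulated annealing only in passing, the following is a generic simulated-annealing sketch — not the paper's stepwise-estimator algorithm. The quadratic objective, Gaussian proposal, and geometric cooling schedule are all illustrative assumptions.

```python
import math
import random

def simulated_annealing(f, x0, steps=5000, t0=1.0, cooling=0.999, seed=0):
    """Minimise f by proposing random moves and accepting worse ones
    with probability exp(-delta / T), where T decays each iteration."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)      # Gaussian proposal step
        fc = f(cand)
        # Always accept improvements; accept worse moves with Boltzmann prob.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                         # geometric cooling schedule
    return best, fbest

# Toy objective with a single minimum at x = 2
best, fbest = simulated_annealing(lambda x: (x - 2.0) ** 2, x0=-5.0)
```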
This paper belongs to the sub-category of AI known as Neural Networks.   Explanation: The paper discusses various competitive learning methods, which are a type of unsupervised learning in neural networks. The paper describes how these methods work and provides examples of their applications. Therefore, the paper is most closely related to the sub-category of Neural Networks.
Probabilistic Methods.   Explanation: The paper discusses a model-based imputation procedure for missing data estimation, which involves selecting between different complete data matrices based on an information-theoretic criterion called stochastic complexity. The model class used in this approach is the set of multinomial models with some independence assumptions, which is a probabilistic method for modeling categorical data.
Genetic Algorithms, Tabu Search.   The paper presents a new evolutionary procedure that combines the mechanisms of genetic algorithms and tabu search to solve optimization problems. The adaptation of this search principle to the National Hockey League (NHL) problem is also discussed. Therefore, the paper is primarily related to Genetic Algorithms and Tabu Search.
Probabilistic Methods. This paper discusses the use of probabilistic finite state automata (PFSAs) for modelling behavioural data, and evaluates possible hypotheses using the Minimum Message Length (MML) measure. The paper also discusses the use of Fogel's Evolutionary Programming for producing globally optimal PFSA models. While Genetic Algorithms are mentioned briefly, they are not the focus of the paper. The other sub-categories of AI (Case Based, Neural Networks, Reinforcement Learning, Rule Learning, Theory) are not mentioned in the text.
Rule Learning, Theory.   The paper discusses the use of inductive learning, which involves inferring rules or patterns from data. This falls under the category of Rule Learning. The paper also proposes a theoretical framework for selecting minimal complexity representations, which falls under the category of Theory.
Probabilistic Methods. This paper belongs to the sub-category of Probabilistic Methods in AI. The paper uses Bayesian framework and reversible jump Markov chain Monte Carlo (MCMC) methods to address the problem of model order uncertainty in autoregressive (AR) time series. The full conditional density for the AR parameters is obtained analytically, and efficient model jumping is achieved by proposing model space moves from it. The paper also compares this method with an alternative method, which proposes moves only for the new parameters in each move.
Case Based, Planning  Explanation:  The paper belongs to the sub-category of Case Based AI because it proposes applying case-based planning methodology to the task of planning to learn. It also involves Planning because the authors argue that relatively simple, fine-grained primitive inferential operators are needed to support the flexible planning this task requires.
Case Based, Theory  Explanation:  - Case Based: The paper presents a PAC analysis of a case-based reasoning algorithm called VS-CBR. It discusses how the algorithm collects cases and adjusts a weighted similarity measure.  - Theory: The paper applies the PAC learning framework to analyze the hypothesis spaces of the learner on different target concepts and explores the constituent parts of the instance-based learner. It also discusses the overall behavior of the algorithm in relation to its constituent parts.
Rule Learning, Theory.   Rule Learning is present in the text as the paper discusses the discovery of partial determinations, which can be seen as rules that describe dependencies between attributes in a relation.   Theory is also present in the text as the paper proposes modifications to a known MDL formula for evaluating partial determinations and describes an efficient preprocessing-based approach for handling numerical attributes. These modifications and approaches are based on theoretical considerations and aim to improve the performance of the algorithm.
Probabilistic Methods, Genetic Algorithms  The paper belongs to the sub-category of Probabilistic Methods as it discusses the use of a minimum message length estimator as a fitness function for evaluating candidate explanations during the search for a near-optimal explanation. It also mentions that explanations with uneven distributions of frequencies on transitions from a node will be preferred, suggesting a probabilistic approach.  The paper also belongs to the sub-category of Genetic Algorithms as it discusses the use of Evolutionary Programming for finding satisfactory approximately optimal explanations. The information theoretic measure of finite state machine explanations is used as the fitness function during the search for a near-optimal explanation, which is a common approach in Genetic Algorithms.
Genetic Algorithms.   Explanation: The paper primarily focuses on using a genetic algorithm to evolve cellular automata and study emergent coordination behavior. The authors describe in detail the evolutionary process and the solutions discovered by the GA. While other sub-categories of AI may also be relevant to the study of emergent behavior, such as neural networks or reinforcement learning, they are not the primary focus of this paper.
Theory.   Explanation: The paper presents a theoretical result, a general bootstrap theorem, for Z estimators with possibly infinite-dimensional parameter spaces. The paper does not discuss any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Probabilistic Methods.   Explanation: The paper introduces the Survival Curve RSA (SC-RSA) method, which is a probabilistic approach for predicting recurrence times in medical domains. The method uses censored input data and produces accurate predicted rates of recurrence while maintaining accuracy on individual predicted recurrence times. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Probabilistic Methods.   Explanation: The paper discusses the "query by committee" algorithm, which is a method for filtering informative queries from a random stream of inputs. The algorithm is based on probabilistic methods, specifically Bayesian Learning, which is mentioned in the keywords. The paper also discusses the prediction error decreasing exponentially with the number of queries, which is a probabilistic result.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper proposes a new method for performing a nonlinear form of Principal Component Analysis (PCA) using integral operator kernel functions. This method involves computing principal components in high-dimensional feature spaces related to input space by some nonlinear map. The paper also presents experimental results on polynomial feature extraction for pattern recognition.  Neural Networks: The method proposed in the paper involves computing principal components in high-dimensional feature spaces related to input space by some nonlinear map. This is similar to the concept of neural networks, where input data is transformed through nonlinear activation functions to produce output. Additionally, the paper mentions using the method for pattern recognition, which is a common application of neural networks.
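The kernel idea described above — computing principal components in a high-dimensional feature space without ever forming the nonlinear map explicitly — can be sketched in a few lines of NumPy. The polynomial kernel and the toy data are illustrative assumptions; this is a minimal sketch, not the paper's implementation.

```python
import numpy as np

def kernel_pca(X, degree=2, n_components=2):
    """Nonlinear PCA via a (here: polynomial) kernel matrix.
    Only inner products in feature space are ever computed."""
    n = X.shape[0]
    K = (X @ X.T + 1.0) ** degree           # polynomial kernel (x.y + 1)^d
    # Center the kernel matrix, i.e. center the data in feature space
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)          # eigh returns ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]   # sort descending
    # Scale eigenvectors by 1/sqrt(lambda) so feature-space axes have unit norm
    alphas = vecs[:, :n_components] / np.sqrt(np.abs(vals[:n_components]))
    return Kc @ alphas                       # projections of the training data

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
Z = kernel_pca(X, degree=2, n_components=2)
```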
Probabilistic Methods. This paper belongs to the sub-category of Probabilistic Methods in AI. The paper discusses the use of graphical decision-modeling formalisms such as belief networks and influence diagrams, which provide a compact representation of probabilistic relationships and support inference algorithms that automatically exploit the dependence structure in such models. The paper also discusses the limitations of these graphical decision models and proposes a knowledge-based model construction approach to generate decision models dynamically at run-time based on the problem description and information received thus far.
Case Based, Reinforcement Learning, Rule Learning  Case Based: The paper proposes a new approach to predicting a given example's class by locating it in the "example space" and then choosing the best learner(s) in that region of the example space to make predictions. This approach is based on past performance of learners in that region, which can be seen as a form of case-based reasoning.  Reinforcement Learning: The paper mentions "dynamic weighting" of learners based on their regional accuracy, which can be seen as a form of reinforcement learning where the weights are updated based on the performance of the learners.  Rule Learning: The paper mentions the use of a rule learner, CN2, as one of the constituent learners in the meta-learning strategies being compared. This indicates the presence of rule learning in the paper.
The sub-category of AI that this paper belongs to is Rule Learning.   Explanation:  The paper describes a program, called Marvin, that uses concepts it has learned previously to learn new concepts. The program forms hypotheses about the concept being learned and tests them by asking the trainer questions. It determines which objects in an example belong to concepts stored in memory, and a description of the new concept is formed by using this information to generalize the description of the training example. The generalized description is tested when the program constructs new examples and shows them to the trainer, asking if they belong to the target concept. Because this process uses rules to generalize and test hypotheses about the new concept, the paper belongs to Rule Learning.
Genetic Algorithms, Neural Networks.   Genetic algorithms are used in the paper to model the evolutionary process of the population. The paper states, "A genetic algorithm using endogenous fitness and local selection is used to model the evolutionary process."   Neural networks are used to model the individuals in the population, with variations in their behaviors related to interactions with varying environments. The paper states, "Individuals in the population are modeled by neural networks with simple sensory-motor systems, and variations in their behaviors are related to interactions with varying environments."
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of neural networks for creating distributed representations of words and phrases. It also mentions the use of neural networks for modeling compositional structure.  Probabilistic Methods: The paper discusses the use of probabilistic models for language modeling and for creating distributed representations. It also mentions the use of probabilistic models for modeling the uncertainty in compositional structure.
Rule Learning.   Explanation: The paper investigates the efficiency of subsumption, which is the basic provability relation in ILP (Inductive Logic Programming). The paper discusses different restrictions of the subsumption problem and proposes efficient algorithms for certain cases. ILP is a subfield of machine learning that focuses on learning rules from examples, making this paper most closely related to the Rule Learning subcategory of AI.
Genetic Algorithms.   Explanation: The paper discusses a variation of genetic programming called "strongly typed" genetic programming (STGP), which is a type of genetic algorithm. The paper explains how STGP addresses the limitation of "closure" in traditional genetic programming, and introduces key concepts such as generic functions and generic data types. The examples presented in the paper involve manipulating vectors, matrices, and lists, which are all common data structures used in genetic programming.
Neural Networks.   Explanation: The paper discusses the limitations of the Recurrent Cascade Correlation (RCC) Network, which is a type of recurrent neural network. The proof presented in the paper shows that the RCC network cannot model certain finite-state automata, regardless of the transfer function used by its units. Therefore, the paper falls under the sub-category of Neural Networks in AI.
Rule Learning, Theory  Explanation:  - Rule Learning: The paper discusses subsumption, which is a key concept in rule learning. The authors propose an algorithm for testing subsumption of determinate clauses, which is important in inductive logic programming (a subfield of rule learning). - Theory: The paper presents theoretical results on the complexity of subsumption testing and its relation to the clique problem. The authors also propose a pruning rule based on prior knowledge, which is a theoretical contribution.
Neural Networks, Theory.  Explanation:  - Neural Networks: The paper introduces a new algorithm designed to learn sparse perceptrons, which are a type of neural network. The algorithm is based on a hypothesis-boosting method, which is a common technique in neural network learning.  - Theory: The paper discusses the theoretical properties of the algorithm, including its ability to PAC-learn a natural class of target concepts. The authors also provide a theoretical analysis of the algorithm's performance.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper proposes to use RELIEFF, an extension of RELIEF, for heuristic guidance of inductive learning algorithms. The authors also mention that they have reimplemented Assistant, a system for top down induction of decision trees, using RELIEFF as an estimator of attributes at each selection step.   Probabilistic Methods are present in the text as the paper discusses the limitations of current inductive machine learning algorithms that use myopic impurity functions and limited look-ahead, which prevent them from detecting significant conditional dependencies between the attributes that describe training objects. The proposed approach using RELIEFF aims to overcome this myopia and improve the accuracy of inductive learning algorithms.
Reinforcement Learning.   Explanation: The paper is specifically about a method for hierarchical reinforcement learning, and the MAXQ method is a type of value function decomposition used in reinforcement learning. The other sub-categories of AI listed (Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Rule Learning, Theory) are not directly relevant to the content of the paper.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses the concept of causality in Genetic Programming (GP) and how it can be used to adapt control parameters for speeding up GP search. It analyzes the effects of crossover and selection in GP, which are both fundamental operators in Genetic Algorithms.  Theory: The paper presents a theoretical analysis of the concept of causality in GP and its correlation to search space exploitation. It also discusses new developments in GP architecture evolution from the causality perspective.
Probabilistic Methods, Rule Learning.   Probabilistic Methods are present in the paper through the use of probabilistic estimates in conjunction with no-pruning, which improves the performance of Bagging. The paper also discusses the use of weight perturbations (Wagging), which is a probabilistic method.   Rule Learning is present in the paper through the use of a decision tree inducer and the comparison of different variants of the algorithm. The paper also measures tree sizes and shows a correlation between the increase in tree size and the success in reducing error.
Theory. The paper discusses the evolution of the formal conception of rationality and its relationship to the informal conception of intelligence, with the goal of improving practical and theoretical research in AI. There is no mention of any specific sub-category of AI such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Probabilistic Methods, Reinforcement Learning  Probabilistic Methods: The paper discusses the use of probabilistic models in the RISE 1.0 learning system, specifically Bayesian networks and Markov models. The author explains how these models are used to represent uncertainty and make predictions about future events.  Reinforcement Learning: The paper also discusses the use of reinforcement learning in the RISE 1.0 learning system. The author explains how the system uses a reward function to guide the learning process and improve performance over time. The paper also describes how the system uses Q-learning to learn optimal policies in a Markov decision process.
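The Q-learning update mentioned in the entry above is, in general terms, the tabular rule sketched below. The two-state chain environment, the function names, and all hyperparameter values are illustrative assumptions, not details from the paper.

```python
import random

def q_learning(transitions, n_states, n_actions, episodes=500,
               alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(20):                         # bounded episode length
            # Epsilon-greedy action selection
            a = (rng.randrange(n_actions) if rng.random() < eps
                 else max(range(n_actions), key=lambda x: Q[s][x]))
            s2, r, done = transitions(s, a, rng)
            # Terminal transitions bootstrap from zero
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) * (not done) - Q[s][a])
            s = s2
            if done:
                break
    return Q

# Toy 2-state chain: taking action 1 in state 0 reaches the goal (reward 1)
def chain(s, a, rng):
    if s == 0 and a == 1:
        return 1, 1.0, True
    return 0, 0.0, False

Q = q_learning(chain, n_states=2, n_actions=2)
```

After training, the value of the goal-reaching action dominates: Q[0][1] approaches 1.0 while Q[0][0] is capped near gamma times that value.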
Case Based, Neural Networks  Explanation:  - Case Based: This paper discusses a method for retrieving similar items from memory, which is a key aspect of case-based reasoning.  - Neural Networks: The paper describes Holographic Reduced Representations, which are a type of distributed representation used in neural networks.
Theory. The paper discusses the US-L* algorithm, which is a theoretical algorithm for learning finite automata from prefix-closed samples. The experiments conducted in the paper are aimed at testing the algorithm's performance on random prefix-closed samples, but the focus is on the theoretical aspects of the algorithm rather than its practical applications.
Neural Networks, Reinforcement Learning  The paper belongs to the sub-categories of Neural Networks and Reinforcement Learning.   Neural Networks: The paper discusses the use of a specific type of neural network, called AA1, and analyzes its convergence and generalization properties. The authors also compare the performance of AA1 with other neural network models.  Reinforcement Learning: The paper uses reinforcement learning to train the AA1 neural network. The authors describe the use of a reward function to guide the learning process and evaluate the performance of the model on a variety of tasks.
Rule Learning, Theory.   The paper discusses the statistical bias and variance of decision tree algorithms, which fall under the category of rule learning. The paper also presents theoretical concepts and methods for measuring and reducing bias and variance in machine learning algorithms.
Reinforcement Learning.   Explanation: The paper discusses the use of macro-actions in reinforcement learning algorithms and analyzes their effect on learning. The focus is on improving the speed and scaling of reinforcement learning, which is a sub-category of AI.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as the authors present a new approach to reinforcement learning using hierarchies of machines. The paper also falls under the category of Theory, as it presents provably convergent algorithms for problem-solving and learning with hierarchical machines.
Case Based, Explanation-Based Learning.   The paper is primarily focused on case-based planning and improving retrieval of previous cases. It also utilizes explanation-based learning techniques to detect and construct reasons for case failure. There is no mention of genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning. The paper does not fit under the category of Theory as it is focused on practical implementation and empirical study.
Neural Networks, Probabilistic Methods.  Neural Networks: The paper focuses on the statistical evaluation of experiments conducted on neural networks. It discusses the minimum requirements for conducting such experiments and the current practices in the field.  Probabilistic Methods: The paper discusses the use of statistical methods for evaluating neural network experiments. It emphasizes the importance of using appropriate statistical techniques to draw meaningful conclusions from the experimental results.
Neural Networks.   Explanation: The paper presents new methods for training large neural networks for phoneme probability estimation. The architecture used combines time-delay windows and recurrent connections to capture the important dynamic information of the speech signal. The paper explores schemes for sparse connection and connection pruning in fully connected recurrent networks. The networks are evaluated in a hybrid HMM/ANN system for phoneme recognition on the TIMIT database and for word recognition on the WAXHOLM database. Therefore, the paper is primarily focused on the use of neural networks for speech recognition.
Probabilistic Methods.   Explanation: The paper discusses bagging, a technique that involves constructing multiple models from bootstrap samples of a database and combining them by uniform voting. The paper then tests two alternative explanations for why bagging works, both based on Bayesian learning theory. The paper concludes that bagging works because it effectively shifts the prior to a more appropriate region of model space. The use of Bayesian learning theory and probability concepts throughout the paper makes it clear that it belongs to the sub-category of Probabilistic Methods.
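The bagging procedure the entry describes — multiple models from bootstrap resamples, combined by uniform voting — can be sketched as follows. The 1-nearest-neighbour base learner and the toy data are illustrative assumptions, chosen only to keep the sketch self-contained.

```python
import random
from collections import Counter

def bagging_predict(train, test_x, learner, n_models=25, seed=0):
    """Bagging: fit one model per bootstrap resample of the training set,
    then combine predictions by uniform (majority) voting."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        # Bootstrap: sample the training set with replacement, same size
        boot = [train[rng.randrange(len(train))] for _ in train]
        models.append(learner(boot))
    votes = Counter(m(test_x) for m in models)
    return votes.most_common(1)[0][0]

# Toy base learner: 1-nearest-neighbour on a single numeric feature
def one_nn(data):
    return lambda x: min(data, key=lambda p: abs(p[0] - x))[1]

train = [(0.0, 'a'), (0.2, 'a'), (0.9, 'b'), (1.1, 'b')]
label = bagging_predict(train, 0.1, one_nn)
```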
Neural Networks, Theory.   Neural Networks: The paper proposes an algorithm called "query by committee" that involves training a committee of students (which can be seen as a type of neural network) on the same data set.   Theory: The paper discusses the theoretical properties of the algorithm, including its information gain and generalization error as the number of queries goes to infinity. The authors suggest that asymptotically finite information gain may be an important characteristic of good query algorithms.
The paper does not belong to any sub-category of AI as it is a technical report from the Department of Statistics and the Department of Computer Science, and does not discuss any specific AI techniques or applications.
Neural Networks, Probabilistic Methods, Theory.   Neural Networks: The paper discusses the use of a non-linear 'infomax' algorithm applied to an ensemble of natural scenes to produce sets of visual filters that resemble those produced by the sparseness-maximisation network of Olshausen & Field (1996). It also mentions the similarity of the resulting filters with the receptive fields of simple cells in visual cortex, which suggests that these neurons form an information-theoretic co-ordinate system for images.  Probabilistic Methods: The paper discusses the use of Independent Components Analysis (ICA) to compare the resulting filters and their associated basis functions with other decorrelating filters produced by Principal Components Analysis (PCA) and zero-phase whitening filters (ZCA). It also mentions that the outputs of these filters are as independent as possible, since the infomax network is able to perform ICA.  Theory: The paper discusses the theoretical concepts of sparse, distributed representation of natural scenes, an unsupervised learning algorithm that attempts to find a factorial code of independent visual features, and the emergence of responses from such algorithms. It also mentions the reasoning of Barlow (1989) that such responses should emerge from an unsupervised learning algorithm that attempts to find a factorial code of independent visual features.
Probabilistic Methods.   Explanation: The paper discusses the use of several model selection techniques for logistic regression, which is a probabilistic method. The techniques mentioned, such as Occam's Window and Bayesian Random Searching, are all probabilistic in nature and involve making probabilistic assumptions about the data and the models being considered. The paper does not mention any other sub-categories of AI, such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Reinforcement Learning, Probabilistic Methods, Theory.   Reinforcement learning is the main focus of the paper, as the authors develop an approach to learning, planning, and representing knowledge based on the mathematical framework of reinforcement learning and Markov decision processes (MDPs).   Probabilistic methods are also present, as the authors discuss stochastic options and the use of the theory of semi-Markov decision processes (SMDPs) to model the consequences of options.   Finally, the paper falls under the category of Theory, as it proposes a novel framework for representing and organizing knowledge at multiple levels of temporal abstraction. The authors introduce new concepts such as options, intra-option temporal-difference methods, and subgoals, and show how they can be used to improve existing methods.
Neural Networks. This paper belongs to the sub-category of Neural Networks as it examines the experimental evaluations of neural network learning algorithms. The paper discusses the need for better assessment practices in this field and suggests the development of easily accessible collections of benchmark problems.
Genetic Algorithms.   Explanation: The paper is specifically focused on analyzing the role of developmental mechanisms in genetic algorithms (GAs). It provides a framework for distinguishing between two developmental mechanisms (learning and maturation) and how they can affect the dynamics of the GA. The paper does not discuss any other sub-categories of AI.
Genetic Algorithms.   Explanation: The paper is solely focused on explaining the concept and implementation of Genetic Algorithms, which is a sub-category of AI. The author provides a detailed tutorial on how to use Genetic Algorithms to solve optimization problems. The paper does not discuss any other sub-category of AI.
Genetic Algorithms.   Explanation: The paper explicitly mentions the use of genetic programming to find optimal monitoring strategies. The difficulties encountered in evolving these strategies using genetic algorithms are also discussed. No other sub-category of AI is mentioned or implied in the text.
Genetic Algorithms, Rule Learning.   Genetic Algorithms are mentioned in the text as the basis for the simulated breeding technique used to evaluate the qualities of offspring generated by genetic operations. Rule Learning is also mentioned as the method used to acquire simple decision rules from the data.
Genetic Algorithms, Inductive Logic Programming, Rule Learning.   Genetic Algorithms and Inductive Logic Programming are both explicitly mentioned in the title and abstract of the paper as the two approaches being compared. Rule Learning is also relevant as the paper is focused on inducing recursive list-manipulation functions, which can be seen as learning rules for manipulating lists.
Case Based.   Explanation: The paper discusses two major problems in case-based reasoning, which is a subfield of AI that involves solving new problems by adapting solutions from similar past problems (i.e., cases). The paper proposes a solution-relevant abstraction to improve the retrieval and adaptation of source cases for analogical theorem proving by induction.
Case Based, Rule Learning  Explanation:  - Case Based: The paper discusses the use of case-based reasoning in domains such as architectural design and law. It also mentions the retrieval of past cases to generate solutions for actual problems. - Rule Learning: The paper presents a general approach to structural similarity assessment and adaptation, which involves limited domain knowledge to support design tasks. This approach can be seen as a form of rule learning, where rules are derived from past cases to generate adapted solutions.
Neural Networks.   Explanation: The paper discusses a new algorithm for training multi-layer perceptrons using the natural gradient learning rule, which is a technique commonly used in neural network training. The paper also mentions the complexity of the algorithm in terms of the input dimension and number of hidden neurons, which are both characteristics of neural networks.
Case Based, Rule Learning.   The paper focuses on case-based reasoning (CBR) systems and their ability to adapt cases to novel situations, which falls under the category of Case Based AI. The hybrid approach described in the paper combines rule-based reasoning with CBR, which falls under the category of Rule Learning AI.
Reinforcement Learning, Probabilistic Methods  Reinforcement Learning is the primary sub-category of AI that this paper belongs to. The paper discusses the limitations of traditional reinforcement learning methods and proposes a methodology for designing the representation and the reinforcement functions that take advantage of implicit domain knowledge in order to accelerate learning in dynamic, situated multi-agent domains characterized by multiple goals, noisy perception and action, and inconsistent reinforcement.   Probabilistic Methods are also present in the paper as the authors discuss the noisy perception and action in multi-agent domains and propose a methodology that takes into account the uncertainty in the environment.
Theory.   Explanation: This paper focuses on the theoretical aspects of learning and problem solving in AI, rather than specific algorithms or methods. It discusses different models of problem solving and their implications for learning, and proposes a structure-behavior-function model as a way to represent problem solvers and enable reflection. While some specific techniques such as Autognostic are mentioned, the paper is primarily concerned with the conceptual framework underlying problem solving and learning.
Case Based, Theory  Explanation:  - Case Based: The paper discusses planning by retrieving and adapting past planning cases, which is a key aspect of case-based reasoning. - Theory: The paper presents a framework for mixed-initiative planning that combines generative and case-based planning, and discusses the challenges and solutions for incorporating human users into this process. This framework and its implementation can be seen as a theoretical contribution to the field of AI planning.
Genetic Algorithms, Probabilistic Methods, Theory.   Genetic Algorithms: The paper discusses Evolutionary Programming and Evolution Strategies, which are both probabilistic optimization algorithms based on the model of organic evolution. This model is a key component of genetic algorithms.  Probabilistic Methods: The paper focuses on two probabilistic optimization algorithms, Evolutionary Programming and Evolution Strategies, and discusses their performance in experimental runs. Theoretical results on global convergence and convergence rate theory are also presented.  Theory: The paper presents theoretical results on global convergence, step size control for a strictly convex, quadratic function, and an extension of the convergence rate theory for Evolution Strategies. These theoretical results are discussed with respect to their implications on Evolutionary Programming.
Reinforcement Learning, Theory.   Reinforcement learning is present in the paper as the control law is constructed by solving a two-player zero-sum differential game on a moving horizon, an approach closely related to reinforcement learning.   Theory is also present, as the paper provides conditions under which the controller results in a stable system and satisfies an infinite-horizon H∞ norm bound. The paper also uses a risk-sensitive formulation to provide a state estimator in the observation-feedback case, which is a theoretical approach.
Genetic Algorithms.   Explanation: The paper investigates genetic algorithms with multi-parent recombination, and performs experiments to observe the effect of different numbers of parents on optimizing various problems. The entire paper is focused on genetic algorithms and their variations, making it most closely related to the sub-category of Genetic Algorithms within AI.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses the use of genetic algorithms in optimization problems and proposes a modification to the traditional approach by incorporating genetic information from the problem domain into the algorithm. The authors argue that this approach can lead to better solutions and faster convergence.   Theory: The paper presents a theoretical framework for incorporating genetic information into genetic algorithms and discusses the implications of this approach for optimization problems. The authors also provide experimental results to support their claims.
Neural Networks, Probabilistic Methods.   Neural Networks: The proposed method is implemented based on radial basis function networks, which are a type of neural network.  Probabilistic Methods: The paper proposes a generalized finite mixture model as a linear combination scheme, which is a probabilistic method. The learning algorithm used for training the linear combination scheme is based on Expectation-Maximization (EM) algorithm, which is also a probabilistic method. The paper also mentions "multiple probabilistic classifiers" in the title, indicating the use of probabilistic methods in the classifiers being combined.
Probabilistic Methods, Reinforcement Learning  Probabilistic Methods: The paper discusses the use of probabilistic models to evaluate student performance and adapt the learning space accordingly. It mentions the use of Bayesian networks to model student knowledge and predict their performance on future tasks.  Reinforcement Learning: The paper proposes the use of reinforcement learning to adapt the learning space based on student performance. It suggests using a reward system to encourage students to engage with the material and improve their understanding. The paper also discusses the use of reinforcement learning to optimize the selection of learning resources for each student.
Neural Networks.   Explanation: The paper discusses the Priority ASOCS model, which is a type of adaptive network composed of simple computing elements operating asynchronously and in parallel. The model is used for learning and generalization, which are key components of neural networks. The paper also discusses the need for multiple styles of generalization, which is a common topic in neural network research.
Theory  Explanation: The paper introduces a new approach to model selection that is based on exploiting the intrinsic metric structure of a hypothesis space, as determined by the natural distribution of unlabeled training patterns. The paper does not use any specific AI sub-category such as neural networks, probabilistic methods, or reinforcement learning. Instead, it focuses on developing a theoretical framework for model selection that can be applied to most function learning tasks. Therefore, the paper belongs to the Theory sub-category of AI.
Genetic Algorithms, Rule Learning.   Genetic Algorithms: The paper uses a genetic algorithm to evolve a set of classification rules with real-valued attributes. The authors view supervised classification as an optimization problem and evolve rule sets that maximize the number of correct classifications of input instances. They also use a variant of the Pitt approach to genetic-based machine learning system with a novel conflict resolution mechanism between competing rules within the same rule set.  Rule Learning: The paper focuses on evolving rule sets for classification using a genetic algorithm. The authors present a new uniform method for representing don't cares in the rules and use a conflict resolution mechanism between competing rules within the same rule set. The goal is to maximize the number of correct classifications of input instances.
Genetic Algorithms, Rule Learning.   Genetic Algorithms: The paper describes an approach to using genetic algorithms to solve a specific problem, namely learning a rule base to adapt the parameters of an image processing operator path. The paper discusses the limitations of classic genetic operators and proposes the use of high-level genetic operators to overcome these limitations.  Rule Learning: The paper specifically focuses on learning a rule base to adapt the parameters of an image processing operator path. The approach described in the paper involves integrating task-specific but domain-independent knowledge to guide the use of genetic operators. This knowledge is used to define rules that govern the use of the genetic operators.
Probabilistic Methods, Rule Learning, Theory.   Probabilistic Methods: The paper discusses using Bayesian probability theory to combine classifications of individual concept descriptions.   Rule Learning: The paper focuses on learning multiple concept descriptions (rule sets) for each class in the data, using the HYDRA algorithm.   Theory: The paper presents experimental evidence and analysis of the effectiveness of learning multiple concept descriptions in "flat" hypothesis spaces. It also discusses the optimal strategy for combining classifications using Bayesian probability theory.
Rule Learning, Theory.   Explanation:   This paper belongs to the sub-category of Rule Learning because it discusses decision trees and what should be minimized in them. Decision trees are a common tool used in rule learning, where the goal is to learn a set of rules that can accurately classify new instances based on their attributes. The paper examines different criteria for minimizing decision trees, such as information gain and gain ratio, which are commonly used in rule learning algorithms.  The paper also belongs to the sub-category of Theory because it presents a theoretical analysis of decision tree algorithms and their performance. The authors derive bounds on the expected error of decision trees under different criteria for minimizing them, and compare these bounds to the actual error rates observed on real-world datasets. This type of theoretical analysis is an important aspect of AI research, as it helps to understand the limitations and strengths of different algorithms and guide the development of new ones.
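The information gain and gain ratio criteria discussed above can be written down in a few lines; the following is a minimal illustration of the standard definitions, not the paper's implementation:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(parent_labels, child_label_lists):
    """Entropy reduction achieved by splitting the parent node
    into the given child nodes."""
    n = len(parent_labels)
    weighted = sum(len(ch) / n * entropy(ch) for ch in child_label_lists)
    return entropy(parent_labels) - weighted

def gain_ratio(parent_labels, child_label_lists):
    """Information gain normalised by the split's intrinsic information,
    penalising splits that fragment the data into many small branches."""
    n = len(parent_labels)
    split_info = -sum(len(ch) / n * log2(len(ch) / n)
                      for ch in child_label_lists)
    return information_gain(parent_labels, child_label_lists) / split_info

# A perfectly separating binary split on balanced classes gains 1 bit:
parent = ["+"] * 4 + ["-"] * 4
print(information_gain(parent, [["+"] * 4, ["-"] * 4]))  # → 1.0
```

A tree learner evaluates candidate splits with one of these criteria and greedily picks the maximiser; the paper's point is that the choice of minimisation criterion affects which trees are grown.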
Reinforcement Learning.   Explanation: The paper presents a novel multi-agent learning paradigm called team-partitioned, opaque-transition reinforcement learning (TPOT-RL). The paper discusses the use of reinforcement learning techniques to enable teams of agents to learn effective policies with very few training examples even in the face of a large state space with large amounts of hidden state. The paper presents the algorithmic details of TPOT-RL as well as empirical results demonstrating the effectiveness of the developed multi-agent learning approach with learned features.
Neural Networks.   Explanation: The paper discusses a method for training multilayer perceptron networks, which are a type of neural network. The focus of the paper is on the effects of using multiple node types within the DMP framework, which is a specific type of neural network. The simulation results show that DMP2 performs favorably in comparison with other learning algorithms, which are also likely to be neural network-based.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses the use of genetic programming (GP) to automate the specification refinement process. GP is a type of genetic algorithm that uses evolutionary principles to generate solutions to problems.   Theory: The paper is focused on the theoretical aspect of program derivation and specification refinement. It discusses a well-known proof logic for program derivation and how it can be encoded for use in a GP-based system. The goal of the research is to determine if GP can be used to automate the specification refinement process, which is a theoretical problem.
Genetic Algorithms.   Explanation: The paper discusses an extension of Genetic Programming called Strongly Typed Genetic Programming (STGP), which is a type of Genetic Algorithm. The paper specifically focuses on extending STGP by allowing for type inheritance, which is a modification to the genetic operators used in the algorithm. While the paper does touch on some theoretical aspects of STGP and type inheritance, the primary focus is on the practical implementation and experimentation with the extended algorithm. Therefore, while there may be some overlap with other sub-categories of AI, Genetic Algorithms is the most closely related.
Genetic Algorithms, Collective Intelligence.   Genetic Algorithms: The paper discusses the integration of distributed search of genetic programming based systems with collective memory to form a collective adaptation search method.   Collective Intelligence: The paper proposes a collective adaptation search method where search agents gather knowledge of their environment and deposit it in a central information repository. Process agents are then able to manipulate that focused knowledge, exploiting the exploration of the search agents. This is an example of collective intelligence, where the knowledge of multiple agents is combined to improve the search process.
Case Based, Constraint Satisfaction Problem (CSP)  Explanation:  - Case Based: The paper discusses the integration of case-based reasoning (CBR) with the constraint satisfaction problem (CSP) formalism.  - Constraint Satisfaction Problem (CSP): The paper focuses on the integration of CBR with CSP, which is a subfield of AI that deals with finding solutions to problems by satisfying a set of constraints.
Rule Learning, Theory.   The paper discusses the problem of small disjuncts in concept learning, a subfield of machine learning that focuses on learning rules or decision boundaries from data. The authors propose a theoretical framework for understanding the problem, which involves analyzing the complexity of the hypothesis space and the sample size required for learning; this framework falls under the category of Theory. Additionally, the paper discusses various rule learning algorithms and their limitations in dealing with small disjuncts, which falls under the category of Rule Learning.
Rule Learning, Theory.   Explanation:  The paper discusses the performance ranking of machine learning algorithms on benchmark datasets, and how the adaptation of domain-specific parameters can cause an optimistic bias in the ranking. The paper then quantifies this bias and demonstrates how unbiased ranking experiments should be conducted. This falls under the sub-category of Rule Learning, which deals with the induction of rules from data. Additionally, the paper presents a theoretical analysis of the bias in the ranking process, which falls under the sub-category of Theory.
Rule Learning, Theory.   Explanation: The paper belongs to the sub-category of Rule Learning as it discusses the construction of decision trees and their accuracy on test data. It also belongs to the sub-category of Theory as it investigates the properties of the set of consistent decision trees and the factors that affect the accuracy of individual trees. The paper does not relate to the other sub-categories of AI mentioned in the question.
Neural Networks, Rule Learning.   Neural Networks: The paper evaluates Bagging and Boosting methods using neural networks as one of the classification algorithms. The paper also discusses the performance of a baseline neural-network ensemble method.  Rule Learning: The paper discusses the use of decision trees as a classification algorithm and evaluates Bagging and Boosting methods using decision trees. Decision trees are a type of rule-based learning algorithm.
Rule Learning, Theory.   Explanation: The paper discusses the process of building decision trees and compares two approaches - pruning and averaging. This falls under the sub-category of Rule Learning. The paper also presents an empirical comparison of the two approaches, which involves analyzing the performance of the decision trees. This falls under the sub-category of Theory.
Probabilistic Methods.   Explanation: The paper presents a novel method of characterizing the activity of cell assemblies in the brain using a Hidden Markov Model, which is a probabilistic method. The model is used to identify the behavioral mode of the animal and directly identify the corresponding collective network activity. The segmentation of the data into discrete states also provides direct evidence for the state dependency of the short-time correlation functions between the same pair of cells, which is another example of probabilistic modeling.
Probabilistic Methods.   The paper discusses model selection and accounting for model uncertainty in linear regression models using Bayesian model averaging, which is a probabilistic method. The authors use Bayesian methods to estimate the posterior distribution of model parameters and to calculate model probabilities. They also discuss the use of information criteria, such as the Bayesian Information Criterion (BIC), which are based on probabilistic principles.
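As a concrete illustration of the BIC mentioned above (a generic sketch, not the paper's implementation), the criterion for a Gaussian linear model can be computed directly from the residual sum of squares:

```python
from math import log

def bic(rss, n, k):
    """Bayesian Information Criterion for a Gaussian linear model:
    n * ln(RSS / n) + k * ln(n), where k counts free parameters.
    Lower BIC is better; exp(-BIC / 2) is proportional to an
    approximate marginal likelihood, the quantity Bayesian model
    averaging uses to weight competing models."""
    return n * log(rss / n) + k * log(n)

# A model that halves the RSS at the cost of one extra parameter
# still wins here, because the fit improvement outweighs the penalty:
n = 100
print(bic(rss=50.0, n=n, k=3) < bic(rss=100.0, n=n, k=2))  # → True
```

In model averaging, predictions from each candidate model are then weighted by these approximate posterior model probabilities rather than committing to a single selected model.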
Probabilistic Methods.   Explanation: The paper discusses the use of Bayesian graphical models for analyzing discrete data. Bayesian methods are a type of probabilistic method that use probability distributions to model uncertainty and make predictions. The authors also acknowledge several experts in the field of Bayesian statistics, indicating that this paper is closely related to this sub-category of AI.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper describes an ensemble of simple feed-forward neural networks that are used to rate each of the images and generate a score for each emotion. The networks were trained on a database of face images that human subjects consistently rated as portraying a single emotion.  Probabilistic Methods: The paper discusses the model's response to limit and direct testing in determining if human subjects exhibit categorical perception in morph image sequences. This involves analyzing a linear sequence of morph images and observing sharp transitions in the output response vector, which suggests the presence of probabilistic methods in the model's approach.
Probabilistic Methods, Neural Networks  Probabilistic Methods: The paper discusses the use of an adaptive filter architecture and ideal inverses of acquired room impulse responses to compare the effectiveness of different-sized separating filter configurations of various filter lengths. These methods involve probabilistic modeling and estimation.  Neural Networks: The paper uses a multi-channel blind least-mean-square algorithm (MBLMS) to improve upon the separation of signals mixed with real-world filters. MBLMS is a neural network-based algorithm that uses a linear filter to separate the sources.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper discusses the adaptation of Rissanen's Minimum Description Length (MDL) principle to handle continuous attributes in the Inductive Logic Programming setting. This involves the creation of rules to handle the continuous attributes.   Probabilistic Methods are present in the text as the MDL pruning mechanism developed in the paper involves calculating the probability of a model given the data and the model complexity. This probability is used to determine which models to prune, with the goal of producing more comprehensible models while retaining their performance.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of recurrent (IIR) networks for blind separation of delayed and convolved sources. It also mentions the use of a feedforward architecture in the frequency domain for real-room separation. Both of these approaches involve the use of neural networks.  Probabilistic Methods: The paper mentions the use of Natural Gradient information maximisation rules for blind separation of mixed signals. This approach involves probabilistic methods for adjusting delays, separating, and deconvolving mixed signals.
Probabilistic Methods.   Explanation: The paper discusses the use of the Fisher information matrix as the Riemannian metric tensor for the parameter space in blind source separation, which is a probabilistic method for separating mixed signals. The paper also describes the steepest descent algorithm to maximize the likelihood function in this Riemannian parameter space, which is a common approach in probabilistic modeling.
Neural Networks.   Explanation: The paper discusses a new scheme to represent the Fisher information matrix of a stochastic multi-layer perceptron and an algorithm to compute the inverse of the Fisher information matrix. The inverse of the Fisher information matrix is used in the natural gradient descent algorithm to train single-layer or multi-layer perceptrons. Therefore, the paper is primarily focused on neural networks.
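The natural gradient idea underlying this line of work, rescaling the ordinary gradient by the inverse Fisher information, can be illustrated on a one-parameter model where the Fisher information is known in closed form. The Bernoulli sketch below is an illustration of the principle only, not the paper's multi-layer perceptron algorithm:

```python
# Natural gradient ascent on the log-likelihood of a Bernoulli(p) model.
# For this one-parameter family the Fisher information is known exactly,
# F(p) = n / (p * (1 - p)) for n tosses, so the natural-gradient update
# is simply the ordinary gradient rescaled by F(p)^-1.

def grad_loglik(p, heads, tosses):
    """d/dp of the Bernoulli log-likelihood with `heads` successes."""
    return heads / p - (tosses - heads) / (1 - p)

def natural_gradient_step(p, heads, tosses, lr=0.1):
    fisher = tosses / (p * (1 - p))  # Fisher information of n tosses
    return p + lr * grad_loglik(p, heads, tosses) / fisher

p = 0.5
for _ in range(200):
    p = natural_gradient_step(p, heads=30, tosses=100)
print(round(p, 3))  # converges toward the MLE 0.3
```

For a multi-layer perceptron the Fisher information matrix has no such closed form, which is why the paper's contribution, an efficient scheme for representing and inverting it, matters.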
Probabilistic Methods.   Explanation: The paper describes a framework for place learning that represents distinct places as evidence grids, which are a probabilistic description of occupancy. The approach to place recognition relies on nearest neighbor classification, augmented by a registration process to correct for translational differences between the two grids. The learning mechanism is lazy in that it involves the simple storage of inferred evidence grids. The paper also discusses experimental studies with physical and simulated robots that suggest the approach improves place recognition with experience, can handle significant sensor noise, benefits from improved quality in stored cases, and scales well to environments with many distinct places. While the paper does not explicitly mention other sub-categories of AI, it is clear that probabilistic methods are the primary focus of the research.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms: The paper describes how the simulated evolution of a population of non-deterministic incremental algorithms offers a new approach to exploring a state space, compared with techniques such as Genetic Algorithms (GA), Evolutionary Strategies (ES), or Hill Climbing. The END model presented in the paper is itself a genetic algorithm that evolves non-deterministic incremental algorithms.  Probabilistic Methods: The non-deterministic incremental algorithms select among choices probabilistically. The state space can be represented as a tree, and a solution is a path from the root of that tree to a leaf, so finding a solution involves probabilistic exploration of the state space.
Neural Networks.   Explanation: The paper describes a neural network system called VISOR that learns visual schemas from examples and processes information through cooperation, competition, and parallel bottom-up and top-down activation of schema representations. The paper then uses this neural network system to simulate and analyze various perceptual phenomena. While other sub-categories of AI may also be relevant to the study of visual perception and learning, the focus of this paper is on the use of a neural network model.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper describes a novel approach to object recognition and scene analysis based on a neural network representation of visual schemas. The schema hierarchy is learned from examples through unsupervised adaptation and reinforcement learning.  Reinforcement Learning: Through this learning, the VISOR system discovers that some objects are more important than others in identifying the scene, and that the importance of spatial relations varies depending on the scene. Even as the inputs differ increasingly from the stored schemas, VISOR's recognition process remains remarkably robust, and it automatically generates a measure of confidence in its analysis.
Neural Networks.   Explanation: The paper focuses on training feedforward neural networks with binary weights, and proposes constructive training methods to improve their performance. The entire paper is dedicated to discussing the implementation and evaluation of these methods in the context of neural networks. While other sub-categories of AI may be relevant to the topic of neural network training, they are not explicitly discussed in this paper.
Genetic Algorithms, Rule Learning.   Genetic Algorithms are present in the text as the learning algorithm used by SAMUEL to learn reactive behaviors for autonomous agents. The paper describes how SAMUEL uses a genetic algorithm to automate the process of creating stimulus-response rules and reduce the knowledge acquisition bottleneck.   Rule Learning is also present in the text as SAMUEL learns reactive behaviors in the form of stimulus-response rules. The paper describes how SAMUEL learns these behaviors under simulation, and how the learning algorithm was designed to learn useful behaviors from simulations of limited fidelity. The paper also describes specific behaviors that have been learned for simulated autonomous aircraft, autonomous underwater vehicles, and robots, including dog fighting, missile evasion, tracking, navigation, and obstacle avoidance.
Neural Networks.   Explanation: The paper discusses the performance of feedforward neural networks with sigmoidal activation functions, specifically in the context of minimizing a cost criterion. The paper compares this technique with the classical perceptron learning rule, which is a type of neural network algorithm. The paper also discusses the use of error criteria and the behavior of networks with hidden units. Overall, the paper is focused on the theory and application of neural networks.
Probabilistic Methods.   Explanation: The paper describes a hierarchical Bayesian framework for combining existing models for longitudinal and spatial data, and uses Markov chain Monte Carlo methods for data analysis. These are all examples of probabilistic methods in AI, which involve modeling uncertainty and making predictions based on probability distributions.
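A minimal random-walk Metropolis sampler conveys the idea behind the Markov chain Monte Carlo machinery mentioned above; this is a generic illustration, not the paper's hierarchical Bayesian model:

```python
import random
from math import log

def metropolis(log_target, x0, steps, step_size=1.0, seed=0):
    """Random-walk Metropolis sampler: draws from a density known only
    up to a normalising constant via MCMC's accept/reject rule."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, step_size)
        # Accept with probability min(1, target(proposal) / target(x)),
        # computed on the log scale for numerical stability.
        if log_target(proposal) - log_target(x) > log(rng.random()):
            x = proposal
        samples.append(x)
    return samples

# Sample from a standard normal, whose log-density is -x^2/2 + const:
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, steps=20000)
print(round(sum(samples) / len(samples), 1))  # chain mean is near 0
```

Hierarchical models like the one in the paper use the same mechanism, but sample each block of parameters in turn conditional on the others.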
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper discusses the use of decision trees as a benchmark for classifier learning; decision trees are a type of rule-based classifier.   Probabilistic Methods are also present, as the paper discusses the use of probabilistic classifiers, such as Naive Bayes, as part of the benchmark, along with standard performance measures such as accuracy and error rates to evaluate the classifiers.
Genetic Algorithms, Theory  Explanation:  The paper primarily discusses the Schema Theorem and its implications for the performance of genetic algorithms. It also touches upon Price's Covariance and Selection Theorem, which is related to the genetic algorithm framework. Therefore, the paper is most related to Genetic Algorithms. Additionally, the paper is focused on theoretical analysis and does not involve any practical implementation or application of AI techniques, which makes it related to Theory.
This paper does not belong to any of the sub-categories of AI listed. It is a technical report on the use of independent component analysis to analyze simulated EEG data using a three-shell spherical head model. The paper does not discuss any AI techniques or applications.
Rule Learning, Theory.   Rule Learning is present in the text as the approach described searches for accurate entailments of a Horn Clause domain theory.   Theory is also present in the text as the paper describes an approach to analytic learning that involves applying a set of operators to derive frontiers from domain theories.
Probabilistic Methods, Rule Learning  Probabilistic Methods: The paper discusses learning methods that use negatively correlated features of the data, which is a probabilistic approach to filtering documents.  Rule Learning: The paper evaluates the stability of several different learning methods under direct transfer, which involves transferring learned filters from one user to another. This is a form of rule learning, where the learned filters can be seen as rules for filtering documents. The paper also proposes a variation on a feature selection method that has been widely used in text categorization, which can be seen as a rule-based approach to feature selection.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper presents a coevolutionary architecture utilizing a genetic algorithm to evolve artificial neural networks.   Neural Networks: The paper focuses on evolving artificial neural networks using the coevolutionary architecture.
Probabilistic Methods.   Explanation: The paper introduces an algorithm, lllama, which combines simple pattern recognizers to estimate the entropy of a sequence. The algorithm uses probabilistic methods to build a model of the sequence and perform maximum a posteriori classification. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
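As a point of comparison for the entropy-estimation task described above, a baseline plug-in estimator can be written in a few lines; this generic sketch is not the lllama algorithm itself:

```python
from collections import Counter
from math import log2

def plugin_entropy(seq, order=1):
    """Empirical (plug-in) conditional entropy estimate in bits per
    symbol, using order-`order` contexts. A generic baseline estimator
    for comparison, not the lllama algorithm described in the paper."""
    contexts, joint = Counter(), Counter()
    for i in range(order, len(seq)):
        ctx, sym = seq[i - order:i], seq[i]
        contexts[ctx] += 1
        joint[(ctx, sym)] += 1
    n = sum(joint.values())
    # max() clamps the -0.0 produced by a perfectly predictable sequence
    return max(0.0, -sum(c / n * log2(c / contexts[ctx])
                         for (ctx, _), c in joint.items()))

print(plugin_entropy("ababababab"))  # → 0.0 (deterministic alternation)
```

Combining several such recognizers, as the paper does, lets the model exploit whichever pattern structure a given sequence actually exhibits.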
Rule Learning.   Explanation: The paper discusses the RISE algorithm, which is a rule induction algorithm. The paper compares different methods of speeding up RISE, specifically partitioning and windowing. The focus is on improving the efficiency and accuracy of rule induction, which falls under the category of rule learning in AI.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper describes an Artificial Life model that uses evolutionary training to control a mobile robot. The neural networks controlling the robot's behavior evolve through genetic duplications, which is a key feature of genetic algorithms.   Neural Networks: The paper focuses on the emergence of modular neural networks in the evolutionary process. The neural networks are trained to control the robot's behavior, and the modular architecture that emerges is a result of genetic duplications.
Neural Networks, Probabilistic Methods, Theory.   Neural Networks: The paper describes a new theory of differential learning for a broad family of pattern classifiers, including many well-known neural network paradigms.   Probabilistic Methods: The paper contrasts differential learning with traditional probabilistic learning strategies and provides proofs that differential learning is more efficient in its information and computational resource requirements.   Theory: The paper presents a new theory of differential learning and provides a series of proofs to support its claims.
Neural Networks.   Explanation: The paper proposes an automatic construction method for a neural network and describes a hypothesis-driven constructive induction approach to expanding neural networks. The method is applied to ten problems using the backpropagation algorithm. The paper does not mention any other sub-categories of AI such as case-based, genetic algorithms, probabilistic methods, reinforcement learning, rule learning, or theory.
Rule Learning, Theory.   Rule Learning is present in the text as the paper investigates the accuracy of concepts learned from examples using the foil learning algorithm. Theory is also present as the paper discusses and compares different estimators of the accuracy of learned concepts.
Probabilistic Methods.   Explanation: The paper describes a decision-theoretic architecture using dynamic probabilistic networks to address the problem of driving an autonomous vehicle in highway traffic. The architecture is designed to handle sensor noise, sensor failure, and uncertainty about the behavior of other vehicles and the effects of one's own actions. The approach has been implemented in a computer simulation system, and the autonomous vehicle successfully negotiates a variety of difficult situations. The use of probabilistic methods is central to the approach described in the paper.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper evaluates two machine learning algorithms, RIPPER and sleeping experts, on text categorization problems. These algorithms construct classifiers that allow the "context" of a word to affect how (or even whether) the presence or absence of the word will contribute to a classification. This approach involves probabilistic modeling of the relationship between words and categories.  Rule Learning: Both RIPPER and sleeping experts construct classifiers based on rules that capture the contextual information of words. However, they differ in their methods of combining contexts and searching for a combination of contexts. The paper evaluates the performance of these rule-based classifiers on a variety of text categorization problems.
Rule Learning, Theory.   Explanation:  The paper describes a method for finding the optimal parameter settings for a given learning algorithm using a particular dataset as training data. The method involves exploring the space of parameter values using best-first search and cross-validation, which is a form of rule learning. The paper also discusses the theoretical basis for the method and reports experimental results, indicating that it is effective in improving the performance of the learning algorithm.
Rule Learning, Case Based.   Rule Learning is present in the text as one of the current analysis techniques implemented in the Feature Vector Editor. The system features an advanced interface that makes it intuitive for people to manipulate data and discover significant relationships. The system encapsulates data within objects and defines generic protocols that mediate all interactions between data, users and analysis algorithms.   Case Based is present in the text as the SHER-FACS International Conflict Management dataset is used as an empirical study for the Feature Vector Editor. The more sophisticated research reformulates SHERFACS conflict codings as machine-parsable narratives suitable for processing into semantic representations by the RELATUS Natural Language System. Experiments with 244 SHERFACS cases demonstrated the feasibility of building knowledge bases from synthetic texts exceeding 600 pages.
Theory.   Explanation: This paper presents a theoretical analysis of feedback loops with saturation nonlinearities using input-output analysis. While the paper does mention some specific examples and applications, such as control systems and biological systems, the focus is on developing a theoretical framework for understanding the behavior of these systems. There is no explicit use or discussion of any specific AI sub-category such as neural networks or reinforcement learning.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper introduces two boosting algorithms that aim to increase the generalization accuracy of a given classifier. Boosting is a probabilistic method that combines multiple weak classifiers to create a strong classifier.   Rule Learning: Both algorithms construct a complementary level-0 classifier that can only generate coarse hypotheses for the training data. This is a form of rule learning, where the level-0 classifier generates rules or hypotheses for the data.
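To make the boosting idea concrete, here is a minimal AdaBoost-style sketch, not the paper's specific level-0 construction; the decision stumps, thresholds, and toy data are invented for illustration:

```python
import math

def adaboost(examples, labels, weak_learners, rounds):
    """Minimal AdaBoost sketch: reweight examples so that later weak
    learners focus on the points earlier ones misclassified."""
    n = len(examples)
    w = [1.0 / n] * n
    ensemble = []  # (alpha, hypothesis) pairs
    for _ in range(rounds):
        # pick the weak learner with the lowest weighted error
        h = min(weak_learners,
                key=lambda h: sum(wi for wi, x, y in zip(w, examples, labels)
                                  if h(x) != y))
        err = sum(wi for wi, x, y in zip(w, examples, labels) if h(x) != y)
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # upweight misclassified examples, downweight correct ones
        w = [wi * math.exp(-alpha * y * h(x))
             for wi, x, y in zip(w, examples, labels)]
        z = sum(w)
        w = [wi / z for wi in w]

    def classify(x):
        return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
    return classify

# Toy 1-D data with labels in {-1, +1} and threshold stumps.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [-1, -1, 1, 1]
stumps = [lambda x, t=t: 1 if x > t else -1 for t in (0.5, 1.5, 2.5)]
clf = adaboost(xs, ys, stumps, rounds=3)
```

The weighted vote over weak hypotheses is what turns "coarse hypotheses" into a strong classifier, which is the combination the paper's two algorithms build on.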
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms (GAs) are mentioned in the abstract and are one of the approaches described in the paper for selecting discrete point data. The paper explores extensions of the standard GA method, which employ multiple parallel populations.   Probabilistic Methods are also mentioned in the abstract as a way to evaluate the information available from a set of discrete object measurements. Population-Based Incremental Learning (PBIL), which is a probabilistic method, is also described as one of the approaches for selecting discrete point data. The paper explores extensions of the standard PBIL method, which also employ multiple parallel populations.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper describes Fossil, an ILP system that learns useful concepts through the use of a search heuristic based on statistical correlation. Fossil's stopping criterion is also discussed, which is independent of the number of training examples and instead depends on a search heuristic that estimates the utility of literals on a uniform scale.   Probabilistic Methods are also present in the text as Fossil's search heuristic is based on statistical correlation, which involves the use of probabilities to measure the strength of relationships between variables.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the limitations of probability theory as a descriptive and normative model of judgment under uncertainty. It also proposes a new normative model that takes into account the system's insufficient knowledge and resources.   Theory: The paper presents a theoretical argument about the inadequacy of probability theory as a model of human judgment under uncertainty and proposes a new normative model based on the assumption of insufficient knowledge and resources.
Genetic Algorithms.   Explanation: The paper presents an approach to the interactive development of programs for image enhancement using Genetic Programming (GP) based on pseudo-colour transformations. The user drives GP by deciding which individual should be the winner in tournament selection, allowing for running GP without a fitness function and transforming it into an efficient search procedure. The paper also proposes a strategy to further reduce user interaction by recording the choices made by the user in interactive runs and using them to build a model which can replace the user in longer runs. These are all characteristics of Genetic Algorithms.
Theory.   Explanation: The paper presents a functional theory of the reading process and argues that it represents a coverage of the task. The theory combines experimental results from psychology, artificial intelligence, education, and linguistics, along with the insights gained from the authors' own research. The paper does not discuss any specific sub-category of AI such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Genetic Algorithms.   Explanation: The paper focuses on speeding up Genetic Programming, which is a subfield of Genetic Algorithms. The authors propose a parallel implementation using the Bulk Synchronous Parallel Programming (BSP) model to improve the efficiency of the algorithm. The paper does not discuss any other sub-categories of AI.
Theory.   Explanation: The paper focuses on the theoretical analysis and empirical simulations of the performance of a memoryless vector quantizer as a function of its training set size. The paper neither mentions any of the other sub-categories of AI nor involves the application of specific AI techniques such as neural networks or reinforcement learning.
Theory. This paper deals with the theoretical analysis of finite-gain input/output stabilization of linear systems with saturated controls. It does not involve the implementation or application of any specific AI techniques such as neural networks, genetic algorithms, or reinforcement learning.
Neural Networks, Theory.   Neural Networks: The paper mentions the implementation of the feedback laws as "single hidden layer neural networks" of simple saturation functions.   Theory: The paper presents a theoretical result on the conditions for global stabilization of linear discrete-time systems using bounded feedback laws. The proof provides an algorithm for the construction of such feedback laws.
Theory.   Explanation: The paper presents a theoretical approach to solving the NP-complete problem of separating two disjoint point sets in n-dimensional real space using a bilinear program. The paper does not involve any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Mathematical Programming, Theory.   Explanation: The paper belongs to the sub-category of AI known as Mathematical Programming, as it formulates the problem of feature selection as a mathematical program with linear constraints and a parametric objective function. It also belongs to the sub-category of Theory, as it discusses the theoretical considerations and effectiveness of different approaches to feature selection.
Neural Networks, Rule Learning.   Neural Networks: The paper describes the use of a Recurrent Neural Network Pushdown Automaton (NNPDA) to infer Deterministic Context-free (DCF) Grammars. The NNPDA consists of a recurrent neural network connected to an external stack memory through a common error function.   Rule Learning: The paper discusses how the NNPDA is able to learn the dynamics of an underlying pushdown automaton from examples of grammatical and non-grammatical strings. The network learns the state transitions in the automaton, as well as the actions required to control the stack. The paper also discusses the use of hints to enhance the network's learning capabilities.
Genetic Algorithms.   Explanation: The paper discusses the implementation and performance analysis of a hardware-based genetic algorithm (HGA). It explains how genetic algorithms are a robust problem-solving method based on natural selection and how hardware's speed advantage and ability to parallelize offer great rewards to genetic algorithms. The paper also describes how the HGA was designed using VHDL to allow for easy scalability and act as a coprocessor with the CPU of a PC. Therefore, the paper primarily belongs to the sub-category of Genetic Algorithms in AI.
Probabilistic Methods.   Explanation: The paper discusses the use of hidden Markov models (HMMs) as a probabilistic tool for time series modeling, and presents a generalization of HMMs called factorial hidden Markov models (FHMMs) that factor the hidden state into multiple variables. The paper also discusses the use of inference and learning algorithms based on computing posterior probabilities, and compares exact and approximate methods for carrying out these computations. The paper's focus on probabilistic modeling and inference makes it most closely related to the sub-category of Probabilistic Methods within AI.
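The posterior computations the paper refers to rest on the standard HMM forward recursion; a minimal sketch follows (a plain HMM with hypothetical list-based parameters, not the factorial variant):

```python
def forward(obs, pi, A, B):
    """HMM forward algorithm: alpha[i] = P(o_1..o_t, s_t = i).
    pi: initial state probs; A[j][i]: transition j -> i; B[i][o]: emission."""
    alpha = [pi[i] * B[i][obs[0]] for i in range(len(pi))]
    for o in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(len(pi))) * B[i][o]
                 for i in range(len(pi))]
    return sum(alpha)  # likelihood P(obs)

# Two states, two symbols: a sticky chain that mostly emits its own symbol.
A = [[0.9, 0.1], [0.1, 0.9]]
B = [[0.8, 0.2], [0.2, 0.8]]
pi = [0.5, 0.5]
likelihood = forward([0, 0, 1], pi, A, B)
```

In an FHMM the hidden state factors into several such chains, so an exact forward pass must enumerate the product state space; that blow-up is what motivates the approximate inference methods the paper compares.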
Probabilistic Methods, Neural Networks, Theory.   Probabilistic Methods: The paper develops a mean field approximation for inference and learning in probabilistic neural networks.   Neural Networks: The paper focuses on probabilistic neural networks and how to improve their inference and learning using mean field theory.   Theory: The paper presents a refined mean field theory that exploits the existence of large substructures in probabilistic neural networks. It also shows how to incorporate weak higher order interactions into a first-order hidden Markov model within mean field theory.
Probabilistic Methods.   Explanation: The paper focuses on Bayesian learning, which is a probabilistic method for learning. The chapter uses probability and Bayes' rule extensively to address the problem of learning. The paper also discusses the controversy surrounding Bayesian and orthodox statistics in the neural networks community. Therefore, the paper belongs to the sub-category of Probabilistic Methods in AI.
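To make the role of Bayes' rule concrete, here is a toy posterior computation over a finite set of hypotheses; the two-coin example is invented for illustration:

```python
def bayes_posterior(priors, likelihoods):
    """Bayes' rule over finite hypotheses:
    P(h | D) = P(D | h) P(h) / sum_h' P(D | h') P(h')."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(joint)  # evidence P(D)
    return [j / z for j in joint]

# Two coins: fair (P(heads) = 0.5) vs biased (P(heads) = 0.9),
# equal priors, after observing three heads in a row.
post = bayes_posterior([0.5, 0.5], [0.5**3, 0.9**3])
```

Bayesian learning as the chapter describes it is this same update applied to model parameters rather than to a pair of coins.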
Rule Learning, Theory.   Explanation:  This paper belongs to the sub-category of Rule Learning as it presents a theory of learning classification rules. The paper discusses the process of learning rules from examples and how to evaluate the performance of the learned rules. It also discusses the limitations of rule learning and suggests ways to overcome them.   Additionally, the paper belongs to the sub-category of Theory as it presents a theoretical framework for learning classification rules. The paper discusses the mathematical foundations of rule learning and provides a formal definition of the problem. It also presents a theoretical analysis of the performance of different rule learning algorithms.
Probabilistic Methods.   Explanation: The paper discusses the use of Boltzmann distribution, which is a probabilistic method, in the bits-back coding approach. The theory behind bits-back coding also involves probabilistic modeling of the source code.
Rule Learning, Theory.   The paper belongs to the sub-category of Rule Learning because it discusses the use of AQ17-DCI, a rule-based system, for constructive induction from data. The authors describe how the system generates rules from examples and how it can be used to learn new rules from data.   The paper also belongs to the sub-category of Theory because it presents a theoretical framework for constructive induction and discusses the limitations of existing approaches. The authors propose a new approach based on AQ17-DCI and provide experimental results to support their claims. They also discuss the implications of their approach for future research in the field.
Neural Networks.   Explanation: The paper describes the use of a modified Recurrent Neural Network (RNN) to learn the structure of interconnection networks. The entire paper is focused on the use of neural networks for this purpose, and there is no mention of any other sub-category of AI.
Probabilistic Methods.   Explanation: The paper discusses the representation requirements for knowledge-based decision modeling, which involves dealing with uncertain knowledge. The paper identifies a set of inference patterns and knowledge types, which are relevant to probabilistic methods. The paper also discusses the need for integrating categorical and uncertain knowledge in a context-sensitive manner, which is a key aspect of probabilistic reasoning. Therefore, the paper is most related to probabilistic methods in AI.
Neural Networks, Theory.   Neural Networks: The paper proposes a computational framework for understanding and modeling human consciousness, which involves a network of computational modules. The simulations described in the paper also involve neural network models.  Theory: The paper presents a theoretical perspective on human consciousness and its relationship to cognitive information processing. It proposes that the contents of consciousness correspond to temporally persistent states in a network of computational modules, and explores the idea that periodic settling to persistent states improves performance. The paper also integrates existing theoretical perspectives on consciousness.
Theory.   Explanation: The paper discusses algorithms developed in the context of computational learning theory, which is a subfield of AI focused on understanding the theoretical foundations of machine learning. The paper does not discuss any specific application or implementation of AI, such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Rule Learning.   Explanation: The paper traces the development of ideas from psychology and early efforts in Artificial Intelligence, which combined with formal methods of inductive inference to evolve into the present discipline of Inductive Logic Programming. Inductive Logic Programming is a form of rule learning, where rules are induced from examples. The paper focuses on the historical development of this field, and does not discuss other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Theory.
Neural Networks, Rule Learning.   Neural Networks: The paper discusses the use of recurrent neural networks for recognizing and generating temporal sequences, and for behaving like deterministic sequential finite-state automata. The paper also discusses the use of algorithms for extracting grammatical rules from trained networks.   Rule Learning: The paper specifically focuses on the ability of recurrent neural networks to perform rule revision, by comparing inserted rules with the rules in the finite-state automata extracted from trained networks. The paper also discusses the results of training a recurrent neural network to recognize a known non-trivial, randomly generated regular grammar, and the ability of the network to correct through training inserted rules which were initially incorrect.
Genetic Algorithms.   Explanation: The paper discusses recombination operators in evolutionary algorithms, which are a type of genetic algorithm. The paper specifically focuses on multi-parent recombination, which is a technique used in genetic algorithms to create offspring from multiple parent solutions.
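A minimal sketch of one multi-parent operator of the kind the paper studies, diagonal crossover, in which an n-parent child is assembled from n segments (the list-of-genes representation is an assumption for illustration):

```python
import random

def diagonal_crossover(parents, rng):
    """Multi-parent diagonal crossover: with n parents, pick n-1 distinct
    cut points and assemble the child from one segment per parent."""
    n, length = len(parents), len(parents[0])
    cuts = sorted(rng.sample(range(1, length), n - 1))
    bounds = [0] + cuts + [length]
    child = []
    for p, (lo, hi) in zip(parents, zip(bounds, bounds[1:])):
        child.extend(p[lo:hi])
    return child

# Three parents with distinguishable genes, so segment origins are visible.
rng = random.Random(0)
child = diagonal_crossover([[0] * 6, [1] * 6, [2] * 6], rng)
```

With two parents and one cut point this reduces to ordinary one-point crossover, which is why multi-parent recombination is viewed as a generalization of the standard operator.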
Neural Networks.   Explanation: The paper discusses a framework for improving the performance of backpropagation learning algorithms in neural networks by utilizing the structural information of the network instead of discarding it. The paper specifically focuses on the characteristic scale of weight changes and how it can be matched to the residuals, allowing structural properties such as a node's fan-in and fan-out to affect the local learning rate and backpropagated error. Therefore, the paper belongs to the sub-category of AI known as Neural Networks.
Rule Learning.   Explanation: The paper presents a method for learning concept descriptions that combine both M-of-N rules and traditional Disjunctive Normal form (DNF) rules using the hypothesis-driven constructive induction approach. The search for hypotheses is done by the standard AQ inductive rule learning algorithm. The paper also discusses the need for M-of-N rules and how they are detected by observing "exclusive-or" or "equivalence" patterns in the hypotheses. Therefore, the paper belongs to the sub-category of AI called Rule Learning.
Case Based, Rule Learning.   The paper belongs to the sub-category of Case Based AI because it discusses the CBR (Conversational Case-Based Reasoning) approach and its bias in case scoring algorithm. The paper also belongs to the sub-category of Rule Learning because it introduces an approach for eliminating the bias in case scoring algorithm.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms: The paper investigates the effectiveness of genetic algorithms as combinatorial function optimizers for four specific problems. The authors compare the performance of genetic algorithms to that of stochastic hillclimbing, which serves as a baseline method for evaluation. The paper also discusses how insights gained from stochastic hillclimbing can lead to improvements in the encoding used by a genetic algorithm.  Probabilistic Methods: Stochastic hillclimbing is a probabilistic method for optimization that involves making random changes to a solution and accepting the change if it improves the objective function. The paper uses stochastic hillclimbing as a baseline method for evaluating the performance of genetic algorithms. The authors demonstrate that simple stochastic hillclimbing methods can achieve results comparable or superior to those obtained by genetic algorithms for the four problems studied.
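The baseline method described above can be sketched in a few lines; the one-max objective here is illustrative, not one of the paper's four problems:

```python
import random

def stochastic_hillclimb(bits, objective, iterations=1000, seed=0):
    """Stochastic hillclimbing on a bit string: flip a random bit and
    keep the change only if the objective does not get worse."""
    rng = random.Random(seed)
    current = list(bits)
    best = objective(current)
    for _ in range(iterations):
        i = rng.randrange(len(current))
        current[i] ^= 1              # random local change
        score = objective(current)
        if score >= best:            # accept improving (or equal) moves
            best = score
        else:
            current[i] ^= 1          # revert the flip
    return current, best

# One-max: maximize the number of 1-bits.
solution, score = stochastic_hillclimb([0] * 20, sum)
```

The point of the comparison in the paper is that even this accept-if-better loop, with no population or crossover, is a surprisingly strong baseline for combinatorial optimization.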
Probabilistic Methods, Theory.   Probabilistic Methods: The paper evaluates the effectiveness of concept sharing, in which substructures or entire structures of previously learned concepts aid in learning other concepts in the same domain, with respect to accuracy, concept size, search complexity, and noise resistance, which are probabilistic measures.  Theory: The paper also provides a theoretical treatment of sharing between related concepts, grounding its evaluation measures in theoretical concepts.
Genetic Algorithms.   Explanation: The paper's title explicitly mentions a "Parallel Genetic Algorithm," and the abstract mentions that the work was supported by the Office of Scientific Computing, U.S. Department of Energy, which suggests a focus on computational methods. The thesis adviser, Dr. Tom Christopher, is also a well-known researcher in the field of genetic algorithms. While other sub-categories of AI may be relevant to the research, such as optimization or parallel computing, the use of genetic algorithms is the most prominent and central aspect of the paper.
Theory.   Explanation: The paper discusses theoretical methods for improving the generalization performance and speed of Support Vector Machines (SVMs). It does not focus on any specific application or implementation of AI, but rather on the underlying mathematical and computational principles of SVMs. Therefore, it falls under the sub-category of Theory in AI.
Neural Networks, Rule Learning.   Neural Networks: The paper discusses a limitation of neural networks and presents an algorithm for extracting comprehensible representations from them.   Rule Learning: The algorithm presented in the paper, Trepan, uses queries to induce a decision tree that approximates the concept represented by a given network. This decision tree can be seen as a set of rules that capture the behavior of the network.
Probabilistic Methods.   Explanation: The paper discusses the relationships among chance, weight of evidence, and degree of belief, which are all concepts related to probability theory. The paper also critiques Dempster-Shafer theory, which is a probabilistic method for managing uncertainty. The new approach introduced in the paper also shares many intuitive ideas with D-S theory, but avoids the problem of inconsistency.
Theory.   Explanation: The paper focuses on formalizing and proving the theoretical basis of Explanation-Based Learning of macro-operators, using a generalization of Probably Approximately Correct (PAC) learning to problem solving domains. The paper does not discuss or apply any specific sub-category of AI such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning.
Theory.   Explanation: The paper presents a theoretical method of incorporating prior knowledge about transformation invariances in support vector learning machines. It does not involve any specific application or implementation of other sub-categories of AI such as neural networks, probabilistic methods, or reinforcement learning.
Genetic Algorithms, Rule Learning.   Genetic Algorithms: The paper reports on the use of genetic algorithms to learn decision rules for complex robot behaviors. The method involves evaluating hypothetical rule sets on a simulator and applying simulated evolution to evolve more effective rules.   Rule Learning: The paper's main contribution is the learning of decision rules for a complex shepherding task involving multiple mobile robots. The learned rules are verified through experiments on operational mobile robots.
Rule Learning, Theory.   Rule Learning is the most related sub-category as the paper proposes an extension of the traditional feature-vector representation to allow set-valued features, and argues that many decision tree and rule learning algorithms can be easily extended to set-valued features.   Theory is also relevant as the paper discusses the connection between the proposed extension and Blum's "infinite attribute" representation, and argues for the efficiency and naturalness of using set-valued features in certain real-world learning problems.
Neural Networks.   Explanation: The paper focuses on a system for training feedforward simple recurrent networks, which are a type of neural network. The authors discuss the challenges of training these networks and propose a solution to improve efficiency and correctness. The paper does not mention any other sub-categories of AI such as case-based reasoning, genetic algorithms, probabilistic methods, reinforcement learning, rule learning, or theory.
Reinforcement Learning, Probabilistic Methods.   Reinforcement Learning is the main focus of the paper, as the authors present two algorithms that perform input generalization to address the challenges of credit assignment and uncertainty in mobile robot learning.   Probabilistic Methods are also relevant: noisy sensors and effectors in complex dynamic environments make the learning problem inherently uncertain. The algorithms presented in the paper trade off long-term optimality for immediate performance and flexibility, which suggests a probabilistic approach to decision-making.
Probabilistic Methods.   Explanation: The paper discusses the use of state space models to estimate the time evolution of empirical volatilities, explicitly including observational noise. State space models are a type of probabilistic method commonly used in time series analysis. The paper also mentions the use of stochastic volatility models and GARCH models, which are also probabilistic methods commonly used in finance.
Reinforcement Learning, Theory.  Explanation:  - Reinforcement Learning: The paper presents TDLeaf(λ), a variation on the TD(λ) algorithm that enables it to be used in conjunction with minimax search. The algorithm is used to learn the evaluation function of a chess program while playing on the Free Internet Chess Server (FICS). The success of the program is attributed to the use of TDLeaf(λ) and reinforcement learning.  - Theory: The paper discusses the relationship between the results obtained by KnightCap and Tesauro's results in backgammon. It also presents some experiments and discusses the reasons for the success of the program.
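For context, here is the plain tabular TD(λ) update that TDLeaf(λ) adapts to game tree search; TDLeaf applies the temporal-difference update at the principal leaf of a minimax search, whereas this sketch is the generic version, not KnightCap's implementation:

```python
def td_lambda(episode, V, alpha=0.1, gamma=1.0, lam=0.7):
    """Tabular TD(lambda): update value estimates along one episode of
    (state, reward, next_state) transitions using eligibility traces."""
    e = {}  # eligibility traces
    for s, r, s_next in episode:
        delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
        e[s] = e.get(s, 0.0) + 1.0
        for state, trace in e.items():
            V[state] = V.get(state, 0.0) + alpha * delta * trace
            e[state] = gamma * lam * trace  # decay the trace
    return V

# A two-step episode with a terminal reward: credit for the final
# reward propagates back to the earlier state via its trace.
V = td_lambda([("a", 0.0, "b"), ("b", 1.0, "end")], {})
```

The eligibility trace is what lets a reward observed at the end of a game adjust evaluations of positions seen many moves earlier.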
Case Based, Rule Learning, Theory.   Case Based: The paper discusses the use of analogy in automated theorem proving, which involves finding similarities between different cases and using them to solve new problems.   Rule Learning: The paper discusses the use of rules and heuristics in automated theorem proving, such as the use of structural similarity and the application of previously proven theorems.   Theory: The paper discusses the theoretical foundations of automated theorem proving, including the use of logic and formal systems to represent and manipulate knowledge. It also discusses the limitations and challenges of current approaches and suggests directions for future research.
Theory.   Explanation: This paper presents a theoretical approach to the problem of minimizing misclassified points by a plane in n-dimensional real space. It formulates the problem as a linear program with equilibrium constraints (LPEC) and proposes a Frank-Wolfe-type algorithm for solving the associated penalty problem. The paper does not involve any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Genetic Algorithms, Reinforcement Learning.   The paper belongs to the sub-categories of Genetic Algorithms and Reinforcement Learning.   Genetic Algorithms: The paper describes Echo as a generic ecosystem model in which evolving agents are represented by a set of genetic algorithms. The agents evolve through a process of selection, crossover, and mutation, which is similar to the process of natural selection in biological systems.   Reinforcement Learning: The paper also describes how the agents in Echo learn through reinforcement learning. The agents receive rewards or punishments based on their actions, and they use this feedback to adjust their behavior and improve their performance over time. The paper discusses how this process of reinforcement learning can lead to the emergence of complex behaviors and strategies in the ecosystem.
Neural Networks. This paper belongs to the sub-category of Neural Networks as it discusses the optimization of weight updates in neural networks. The authors propose centering all factors involved in the weight update, including the slope of hidden unit activation functions, to improve credit assignment in networks with shortcut connections and speed up learning without adversely affecting the trained network's generalization ability. The paper also references previous work on centering input and hidden unit activities and error signals in neural networks.
Neural Networks, Rule Learning.   Neural Networks: The paper discusses ASOCS as a type of non-von Neumann architecture that uses numerous simple processing elements with modifiable weighted links to achieve a high degree of parallelism. It also mentions that ASOCS models support efficient computation through self-organized learning and parallel execution, which is similar to the goals of current neural network models.   Rule Learning: The paper proposes an ASOCS model for massively parallel processing of incrementally defined rule systems in areas such as adaptive logic, robotics, logical inference, and dynamic control. It also discusses how ASOCS incorporate rules into an adaptive logic network in a parallel and self-organizing fashion, and how the model learns by modifying its topology through the incremental presentation of rules and/or examples. The paper also proposes a learning algorithm and architecture for Priority ASOCS, which uses rules with priorities and has significant learning time and space complexity improvements over previous models.
Theory, Rule Learning.   Theory is the most related sub-category as the paper focuses on the development and evaluation of a theoretically founded algorithm for agnostic PAC-learning of decision trees. The paper also discusses the theoretical guarantees of the algorithm and its differences from other learning algorithms in terms of performance on new datasets.   Rule Learning is also applicable as the paper specifically focuses on the learning of decision trees, which are a type of rule-based model. The algorithm T2 is designed to learn decision trees of at most 2 levels, and the paper evaluates its performance on real-world datasets compared to the widely used C4.5 algorithm for decision tree learning.
Neural Networks. This paper belongs to the sub-category of Neural Networks as it discusses the performance of neural network simulations and the distribution of results for practical problems. The paper also presents a controlled task to analyze the distribution of performance.
Probabilistic Methods.   Explanation: The paper discusses a qualitative framework for probabilistic inference, which is a key aspect of probabilistic methods in AI. The author introduces algorithms for probabilistic inference and discusses their applications in various fields. The paper does not discuss any other sub-categories of AI mentioned in the options.
Genetic Algorithms, Probabilistic Methods, Theory.   Genetic Algorithms: The paper describes the simulation of the evolution of minimats, which involves the use of genetic algorithms to generate new generations of minimats with different inherited probability distributions for their behaviors.   Probabilistic Methods: The minimats behave solely by picking among the actions of moving, eating, reproducing, and sitting according to an inherited probability distribution. The paper also discusses the importance of probability distributions in describing the minimat world.   Theory: The paper aims to establish initial answers to questions about how the structure of an environment affects the behaviors of organisms that have evolved in it. It presents a theoretical framework for studying the impact of global environment structure on individual behavior and discusses the complexity of this study due to the way minimats construct their own environments through their individual behaviors.
Probabilistic Methods.   Explanation: The paper discusses the use of graphical models, which are a type of probabilistic method, for causal inference. The paper also mentions the use of observed distributions and structural equations, which are commonly used in probabilistic modeling.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper discusses various measures for estimating the quality of multi-valued attributes, such as information gain, J-measure, gini-index, and gain-ratio. These measures are based on probabilistic methods and are used to calculate the relevance and importance of different attributes.  Theory: The paper also introduces a new function based on the MDL principle, which is a theoretical concept used in information theory. The function is designed to estimate the quality of multi-valued attributes and is based on the idea that the best model is the one that minimizes the description length of the data.
Case Based, Rule Learning  Explanation:  - Case Based: The paper discusses a nearest neighbor algorithm for learning from examples, which is a type of case-based reasoning.  - Rule Learning: The algorithm described in the paper calculates distance tables and attaches weights to instances, which can be seen as creating rules for classification.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the performance improvement of the Naive-Bayes algorithm when features were discretized using an entropy-based method. Naive-Bayes is a probabilistic algorithm that relies on Bayes' theorem to make predictions.  Rule Learning: The paper also discusses the performance improvement of the C4.5 induction algorithm when features were discretized in advance. C4.5 is a rule learning algorithm that builds decision trees based on the discretized features.
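The entropy-based discretization mentioned above picks cut points for a numeric feature by minimizing the class-label entropy of the resulting intervals. The following is a hedged sketch of that idea for a single binary split; the toy data and the helper names (`entropy`, `best_split`) are illustrative assumptions, not the paper's code.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a class-label multiset.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(values, labels):
    """Return the cut point minimizing the weighted class entropy."""
    pairs = sorted(zip(values, labels))
    best = (float("inf"), None)
    for i in range(1, len(pairs)):
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        w = (len(left) * entropy(left)
             + len(right) * entropy(right)) / len(pairs)
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2   # midpoint between neighbors
        if w < best[0]:
            best = (w, cut)
    return best[1]

cut = best_split([1.0, 2.0, 3.0, 10.0, 11.0, 12.0],
                 ["a", "a", "a", "b", "b", "b"])   # perfectly separable data
```

Applying such splits recursively (with an MDL stopping criterion, in the method the record refers to) yields the discrete intervals that Naive-Bayes and C4.5 then consume.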
Genetic Algorithms.   Explanation: The paper explicitly discusses the use of genetic algorithms to evolve cellular automata for computational tasks. The entire paper is focused on this topic and does not discuss any other sub-category of AI.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses cellular automata that were evolved for performing certain computational tasks, which implies the use of genetic algorithms to evolve the CAs. The embedded-particle models introduced in the paper are evaluated by comparing their estimated performances with the actual performances of the CAs they model, which is a common approach in genetic algorithm-based research.  Theory: The paper presents a framework for describing the emergent computational strategies observed in evolved CAs, which can be considered a theoretical contribution to the field of AI. The authors also show that their framework captures the main information processing mechanisms of the emergent computation that arise in these CAs, which further supports the theoretical nature of the paper.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper introduces an analytical model for the dynamics of a mutation-only genetic algorithm (GA) and describes the GA's population dynamics in terms of flows in the space of fitness distributions. The paper also discusses the occurrence of "fitness epochs" and the innovations between them, which are specific to the finite population dynamics of the GA.  Theory: The paper presents a theoretical analysis of the dynamics of the GA, deriving closed-form expressions for the trajectories through fitness distribution space and the metastable fitness distributions during fitness epochs. The paper also analyzes the Jacobian matrices in the neighborhood of an epoch's metastable fitness distribution to reveal the state space's topological structure and derive quantitative predictions for a range of dynamical behaviors. The paper discusses the connections of the results with those from population genetics and molecular evolution theory.
Genetic Algorithms, Rule Learning.   Genetic Algorithms are the main focus of the paper, as the authors explore the application of GAs to a symbolic learning task. The GA concept learner (GABL) is implemented and compared to an incremental concept learner, ID5R.   Rule Learning is also relevant, as the GABL is a rule-based learner that learns a concept from a set of positive and negative examples. The authors note that GABL is effective and competitive with ID5R as the target concept increases in complexity.
Rule Learning, Case Based.   Rule Learning is present in the text as the system uses a set of rules to generate and evaluate design options. The manual also explains how to modify and create new rules to customize the system.   Case Based is present in the text as the system uses a database of past design cases to generate new design options. The manual explains how to search and retrieve cases from the database and how to use them to generate new options.
Probabilistic Methods, Rule Learning, Theory.   Probabilistic Methods: The paper discusses the use of cross-validation and bootstrap methods for accuracy estimation and model selection, which are probabilistic methods commonly used in machine learning.  Rule Learning: The paper reports on a large-scale experiment using the C4.5 and Naive-Bayes algorithms to estimate the effects of different parameters of these algorithms on real-world datasets; C4.5 in particular is an example of a rule learning method in machine learning.  Theory: The paper reviews accuracy estimation methods and compares cross-validation and bootstrap methods. It also reports on recent experimental and theoretical results on the effectiveness of different methods for model selection. These discussions are related to the theoretical aspects of machine learning.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the Naive-Bayes algorithm, which is a probabilistic method for classification. The proposed NBTree algorithm is also a hybrid of decision-tree classifiers and Naive-Bayes classifiers.   Rule Learning: The paper discusses decision trees, which are a type of rule learning algorithm. The proposed NBTree algorithm also uses decision-tree nodes with uni-variate splits.
Probabilistic Methods, Neural Networks, Theory.   Probabilistic Methods: The paper mentions that MLC++ provides general learning algorithms for supervised machine learning, which includes probabilistic methods such as Naive Bayes and logistic regression.  Neural Networks: The paper mentions that MLC++ provides general learning algorithms for supervised machine learning, which includes neural networks.  Theory: The paper discusses the design of MLC++ and how it aims to extract commonalities of algorithms and decompose them for a unified view that is simple, coherent, and extensible. This involves theoretical considerations of how to best organize and present machine learning algorithms.
Probabilistic Methods.   Explanation: The paper discusses Bayesian models involving Dirichlet process mixtures, which are a type of probabilistic method. The focus is on the use and integration of nonparametric ideas in hierarchical models, which is a common application of probabilistic methods in AI. The paper also mentions the use of MCMC methods for computation, which is a common technique in probabilistic modeling.
Probabilistic Methods.   Explanation: The paper focuses on the analysis of a specific probabilistic algorithm, the Bayesian classifier, and explores its behavior in different scenarios. The authors also mention other probabilistic approaches to inductive learning that have been developed in the literature.
Neural Networks, Theory.  Explanation:   - Neural Networks: The paper discusses the importance of regularization for training and optimization of neural network architectures, and proposes a tool for iterative estimation of weight decay parameters in the context of network training and pruning. - Theory: The paper is based on asymptotic sampling theory and provides a scheme for gradient descent in the estimated generalization error with respect to the regularization parameters. The scheme is implemented in the Designer Net framework and is based on the diagonal Hessian approximation. The paper also presents experimental results to demonstrate the viability of the approach.
Neural Networks, Rule Learning.   Neural Networks: The paper describes the Extentron algorithm which grows multi-layer networks using the simple perceptron rule for linear threshold units. The algorithm is compared to other neural network paradigms.  Rule Learning: The paper discusses the perceptron learning algorithm and its convergence properties. The Extentron algorithm is described as using the simple perceptron rule for linear threshold units. The algorithm can be completely specified using only two parameters. The paper also compares the Extentron to symbolic learning systems.
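The simple perceptron rule for linear threshold units referenced above adjusts the weights toward any misclassified input. A minimal hedged sketch (the AND-function training set and the learning-rate/epoch values are assumed examples, not the Extentron paper's own setup):

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out
            # Perceptron rule: nudge weights toward misclassified inputs.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Linearly separable AND function; the rule converges in a few epochs.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

When the data are not linearly separable, this rule alone cannot converge, which is why growing algorithms such as the Extentron add units and layers on top of it.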
Neural Networks. This paper belongs to the sub-category of Neural Networks. The paper discusses the centering of various factors involved in the network's gradient to improve credit assignment in networks with shortcut connections. The benchmark results show that this approach can speed up learning without adversely affecting the trained network's generalization ability. The paper does not discuss any other sub-categories of AI.
Theory.   Explanation: The paper introduces a new fault-tolerant model of algorithmic learning using an equivalence oracle and an incomplete membership oracle, and analyzes the performance of the algorithm in learning monotone DNF formulas. The paper does not involve any implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Case Based, Theory  Explanation:   - Case Based: This paper discusses the process of generalization over design experiences in order to discover physical principles. This can be seen as a form of case-based reasoning, where the reasoner uses past experiences to make inferences about new situations. However, the paper does not explicitly mention the term "case-based" or describe any specific case-based reasoning algorithms. - Theory: The paper focuses on the task of hypothesis formation and hypothesis testing in the context of discovering physical principles. This can be seen as a theoretical approach to AI, where the goal is to develop general principles that can be applied across different domains. The paper discusses the representation of domain principles as device-independent behavior-function models, which can be seen as a theoretical framework for understanding physical systems.
Case Based, Theory  Explanation:  - Case Based: The paper discusses the use of a case-based method in experience-based design and how learning the "right" indices to a case is crucial for the success of the method. The paper also describes how the KRITIK2 system implements and evaluates the model-based method for learning indices to design cases. - Theory: The paper proposes a model-based method for learning indices to design cases using structure-behavior-function (SBF) models. The SBF model of a design provides the functional and causal explanation of how the structure of the design delivers its function. The paper also discusses how the prior design experiences stored in case-memory help to determine the level of index generalization.
Theory. This paper presents a theoretical approach to stabilizing linear systems with bounded controls. It does not involve any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Probabilistic Methods.   Explanation: The paper discusses the Bayesian approach to comparing models and uses reversible jump Markov chain Monte Carlo to calculate posterior probabilities of hierarchical, graphical, or decomposable log-linear models. The choice of suitable prior distributions for model parameters is also discussed in detail. These are all examples of probabilistic methods in AI.
Case Based, Rule Learning  Explanation:   - Case Based: The paper discusses a system that can learn new indices for existing explanatory schemas. This involves identifying relevant pieces of information (indices) in a given situation that trigger the relevant schema in the system's memory. This is similar to how a case-based reasoning system would identify relevant cases based on their similarity to the current situation.  - Rule Learning: The paper discusses two methods using which the system can identify the relevant schema even if the input does not directly match an existing index, and learn a new index to allow it to retrieve this schema more efficiently in the future. This involves learning new rules or conditions for selecting the appropriate schema based on the input.
Theory.   Explanation: The paper describes a method for synthesizing H∞ controllers online using the exact plant model on a finite interval into the future. The approach is based on deriving inequalities from the two-Riccati-differential-equation solution to the finite-horizon H∞ problem and exploiting the resulting freedom to construct controllers that satisfy a new robust performance condition. The paper does not mention any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Theory.   Explanation: The paper presents a new semi-lattice based system, IGLUE, that uses Galois lattices or concept lattices for concept learning. The paper does not discuss or use any of the other sub-categories of AI listed in the question.
Genetic Algorithms, Rule Learning.   Genetic Algorithms (GAs) are a key component of the proposed hybrid learning methodology, as they are used to search the space of all possible subsets of candidate discrimination features. The fitness function used by the GA is based on the classification performance of decision trees produced by the ID3 algorithm.   Rule Learning is also relevant, as the ID3 algorithm used in this approach produces decision trees that can be interpreted as sets of rules for classification. The paper discusses how the resulting decision trees can be used to identify important discriminatory features and gain insights into the underlying patterns in the data.
Neural Networks, Rule Learning.   Neural Networks: The paper presents a neural network architecture that can manage structured data and refine knowledge bases expressed in a first order logic language. The presented framework is well suited to classification problems in which concept descriptions depend upon numerical features of the data. The paper discusses a method to translate a set of classification rules into neural computation units and algorithms to refine network weights on structured data.   Rule Learning: The paper discusses a method to translate a set of classification rules into neural computation units. The classification theory to be refined can be manually handcrafted or automatically acquired by a symbolic relational learning system able to deal with numerical features. The primary goal is to bring into a neural network architecture the capability of dealing with structured data of unrestricted size, by dynamically binding the classification rules to different items occurring in the input data.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper discusses the use of genetic algorithms in neuro-evolution, specifically in terms of generating offspring with different crossovers and evaluating them based on their performance compared to the population. The concept of culling overlarge litters is also a common technique in genetic algorithms.  Neural Networks: The paper focuses on the population of neural nets and how their behaviors can be used as a form of culture to improve neuro-evolution. The technique of teaching offspring using backpropagation is also a common method in neural network training.
Theory.   Explanation: The paper describes the Structure-Mapping Engine (SME) and its design, which is based on Gentner's Structure-mapping theory of analogy. The paper also discusses the complexity of the algorithm and provides examples of its operation taken from cognitive simulation studies and work in machine learning. While the paper does touch on machine learning, it is primarily focused on exploring and testing a specific theory of analogy, making it most closely related to the sub-category of Theory.
Theory.   Explanation: This paper proposes a theoretical model for representing and understanding the cognitive processes involved in invention, using the Structure-Behavior-Function language and the ACT-R architecture. It does not involve the application of any specific AI sub-category such as case-based reasoning, neural networks, or reinforcement learning.
Probabilistic Methods, Rule Learning  The paper belongs to the sub-category of Probabilistic Methods because it uses a k-nearest neighbor classifier, which is a probabilistic method for classification. The paper also belongs to the sub-category of Rule Learning because it uses a diabolo classifier, which is a rule-based classifier. The diabolo classifier is designed to be invariant under transformations like rotation, scale or slope and can deal with variations in stroke order and writing direction.
Probabilistic Methods, Neural Networks  Explanation:  - Probabilistic Methods: The paper discusses the use of Boltzmann machines for probability density estimation, and compares the results obtained through decimatable Boltzmann machines to those obtained through Gibbs sampling.  - Neural Networks: The paper specifically focuses on Boltzmann machines, which are a type of neural network. The decimation technique used in the paper is also a neural network-based approach.
Theory. The paper is focused on the theoretical aspects of function learning, including the structure of learning models, the optimal learning cost, and the relationship between learning costs for different function classes. While the paper does mention an efficient learning algorithm for a specific class of functions, this is presented as a theoretical result rather than a practical application of a specific AI sub-category.
Neural Networks.   Explanation: The paper discusses the limitations of neural networks in terms of comprehensibility of the acquired concepts and proposes algorithms for extracting "symbolic" concept representations from trained neural networks. The entire paper is focused on neural networks and their limitations in terms of comprehensibility.
Theory.   Explanation: The paper belongs to the sub-category of AI theory as it introduces a formal model of learning and analyzes the impact of knowledge gaps on learning for various concepts. The paper does not discuss any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Theory  Explanation: The paper presents several techniques for estimating the generalization error of a bagged learning algorithm without invoking more training of the underlying learning algorithm. It also discusses the bias-variance decomposition and how it can be used to estimate the generalization error. These are all theoretical concepts related to machine learning. The paper does not discuss any specific sub-category of AI such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Genetic Algorithms.   Explanation: The paper presents an algorithm for the discovery of building blocks in genetic programming (GP), which is a subfield of evolutionary computation that uses genetic algorithms as a search and optimization technique. The paper describes how the algorithm adapts the problem representation by extending the set of terminals and functions with a set of evolvable subroutines, which is a common approach in genetic programming. The paper also discusses how the algorithm supports subroutine creation and deletion based on differential parent-offspring fitness and block activation, which are key concepts in genetic algorithms. Therefore, this paper belongs to the sub-category of AI known as Genetic Algorithms.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper describes a technique based on sampling the input-output behavior of a Boolean formula on a probability distribution determined by the fixed point of the formula's amplification function. The authors perform statistical tests on variants of the fixed-point distribution to infer structural information about the formula.   Theory: The paper presents a new technique for exactly identifying certain classes of read-once Boolean formulas and applies it to prove the existence of short universal identification sequences for large classes of formulas. The authors also describe extensions of their algorithms to handle high rates of noise and to learn formulas of unbounded depth in Valiant's model with respect to specific distributions.
Rule Learning, Theory.   Rule Learning is the most related sub-category as the paper deals with learning k-term DNF formulas using equivalence queries and incomplete membership queries. The algorithm described in the paper is a rule-based algorithm that identifies a k-term DNF formula with a k-term DNF hypothesis.   Theory is also relevant as the paper discusses the theoretical aspects of learning k-term DNF formulas using the given model of equivalence queries and incomplete membership queries. The paper presents a polynomial-time algorithm for exact identification of a k-term DNF formula, which is a theoretical result.
Neural Networks.   Explanation: The paper discusses the implementation of feedforward neural networks with dynamic topologies using Location-Independent Transformations (LITs). It specifically presents LITs for two types of neural networks: the single-layer competitive learning network and the counterpropagation network. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Probabilistic Methods.   Explanation: The paper presents a method for obtaining local error bars for nonlinear regression by applying a maximum-likelihood framework to an assumed distribution of errors. This approach is based on probabilistic methods, which involve modeling uncertainty and making predictions based on probability distributions. The paper also mentions that the method assumes a normally distributed target noise and provides estimates of model misspecification, which are both probabilistic concepts. Therefore, this paper belongs to the sub-category of AI known as Probabilistic Methods.
Case Based, Reinforcement Learning  Explanation:  - Case Based: The paper is about improving case-based reasoning (CBR) systems by addressing the feature selection problem for case similarity retrieval.  - Reinforcement Learning: The paper proposes a method that uses introspective reasoning to learn new features for indexing, a feedback-driven process akin to reinforcement learning. The introspective reasoning component monitors system performance and refines the indices in response to failures so as to avoid similar future failures.
Case Based.   Explanation: The paper discusses a method for case retrieval in which the structure of the case is used to guide the retrieval process. This is a key characteristic of case-based reasoning, which is a subfield of AI that involves solving new problems by adapting solutions from similar past cases. The paper describes how the structure of a case can be represented using a graph-based model, and how this model can be used to guide the retrieval of similar cases. Overall, the paper is focused on the use of past cases to inform problem-solving, which is a core aspect of case-based reasoning.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper proposes an algorithm to learn the structure of the rules that represent the system. The algorithm gives a small set of fuzzy rules that represent the original set of examples.   Probabilistic Methods are present in the text as the algorithm is able to manage fuzzy information, which requires reasoning under uncertainty. Learning the rule structure from the examples likewise involves dealing with uncertainty and graded degrees of membership.
Theory.   Explanation: The paper focuses on theory refinement as a key concept for knowledge-base maintenance, and provides an overview of the state-of-the-art in theory refinement as a search problem. The other sub-categories of AI (Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, Rule Learning) are not mentioned or discussed in the paper.
Probabilistic Methods, Theory  Probabilistic Methods: The method described in the paper for change detection is based on the generation of T cells in the immune system, which involves probabilistic processes such as random recombination and mutation. The paper also discusses the use of probability distributions to model the behavior of computer viruses.  Theory: The paper presents a theoretical framework for understanding the problem of protecting computer systems as a problem of self-nonself discrimination. It also includes mathematical analysis of the computational costs of the proposed method.
Probabilistic Methods.   Explanation: The paper discusses Markov Chain Monte Carlo (MCMC) methods, which are a type of probabilistic method used for statistical inference. The paper proposes a methodology based on the Central Limit Theorem for Markov chains to assess convergence of MCMC algorithms. The paper also discusses the application fields related to these methods and theoretical convergence properties. Therefore, this paper belongs to the sub-category of Probabilistic Methods in AI.
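For context on the kind of algorithm those convergence diagnostics apply to, here is a hedged sketch of a random-walk Metropolis sampler, one of the simplest MCMC methods. The standard normal target, step size, and sample count are illustrative assumptions, not values from the paper.

```python
import math
import random

random.seed(0)

def log_target(x):
    # Unnormalized log density of N(0, 1).
    return -0.5 * x * x

def metropolis(n_samples, step=1.0, x0=0.0):
    x, chain = x0, []
    for _ in range(n_samples):
        prop = x + random.uniform(-step, step)
        # Accept with probability min(1, pi(prop) / pi(x)).
        if math.log(random.random()) < log_target(prop) - log_target(x):
            x = prop
        chain.append(x)
    return chain

chain = metropolis(20000)
mean = sum(chain) / len(chain)
```

Because successive draws are correlated, the chain's sample mean converges to the target mean more slowly than i.i.d. sampling would; CLT-based diagnostics of the sort the paper proposes estimate exactly this Monte Carlo error to decide when the chain has run long enough.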
Reinforcement Learning.   Explanation: The paper explores the application of Temporal Difference (TD) learning, which is a type of reinforcement learning, to forecasting the behavior of dynamical systems with real-valued outputs. The paper compares the performance of TD learning to standard supervised learning and discusses the architecture of neural networks used in both paradigms. Therefore, the paper belongs to the sub-category of AI known as Reinforcement Learning.
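The core TD update can be illustrated with tabular TD(0) value prediction. This is a hedged sketch on a 5-state random-walk environment, an assumed textbook-style example rather than the forecasting task the paper studies.

```python
import random

random.seed(1)
N = 5            # states 0..4; episodes end off either edge
V = [0.5] * N    # value estimates, initialized to 0.5
alpha = 0.1      # step size

for _ in range(2000):
    s = N // 2   # start in the middle
    while True:
        s2 = s + random.choice((-1, 1))
        if s2 < 0:                         # left terminal, reward 0
            V[s] += alpha * (0.0 - V[s]); break
        if s2 >= N:                        # right terminal, reward 1
            V[s] += alpha * (1.0 - V[s]); break
        # TD(0): V(s) <- V(s) + alpha * (r + V(s') - V(s)), with r = 0
        V[s] += alpha * (V[s2] - V[s])
        s = s2
```

The estimates converge near the true values 1/6, 2/6, ..., 5/6; the key contrast with supervised learning, as the paper discusses, is that each update bootstraps from the next state's estimate rather than waiting for the final outcome.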
Neural Networks, Probabilistic Methods.   Neural Networks: The paper describes a multilayer, unsupervised neural network that builds a hierarchy of representations of sensory input. The network has bottom-up "recognition" connections and top-down "generative" connections that are used to convert sensory input into underlying representations and reconstruct the sensory input from the representations, respectively.   Probabilistic Methods: The paper describes a learning algorithm that involves a "wake" phase and a "sleep" phase, where the network is driven by recognition and generative connections, respectively. The synaptic learning rule is simple and local, and the combined effect of the two phases is to create representations of the sensory input that are efficient in terms of the number of bits required to describe them. The paper also discusses the use of probabilistic models to represent uncertainty in the sensory input and the representations.
Probabilistic Methods.   Explanation: The paper discusses Bayesian inference and the use of priors in defining probability distributions over model parameters (connection weights) in multilayer perceptron networks. The focus is on the implications of these priors for the corresponding priors over functions computed by the network, and how these priors can be defined in a way that allows for infinite networks without overfitting. The paper also explores the properties of different types of priors, including Gaussian and non-Gaussian stable distributions. Overall, the paper is primarily concerned with probabilistic methods for modeling neural networks.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as the authors present new algorithms for reinforcement learning and prove their convergence to near-optimal performance in polynomial time.   Theory is also relevant, as the paper provides theoretical analysis and proofs of the polynomial bounds on the resources required for the algorithms to achieve near-optimal return in general Markov decision processes.
Case Based, Rule Learning  Explanation:  The paper belongs to the sub-category of Case Based AI as it discusses the basic premise of case-based reasoning and argues for "pure" case-based reasoning. The paper also mentions previous CBR systems such as CHEF, SWALE, and HYPO, as well as a CBR system being developed by the first author called COOKIE.   The paper also belongs to the sub-category of Rule Learning as it contrasts reasoning from cases with reasoning from rules, which are facts and if-then structures with no stated connection to any real episodes. The paper argues for pure case-based reasoning, which involves reasoning from representations that are both concrete and reasonably complete.
Reinforcement Learning, Neural Networks.   Reinforcement learning is the main focus of the paper, as it discusses the curse of dimensionality in reinforcement learning and dynamic programming. The paper proposes a new algorithm for reinforcement learning that is safe from divergence yet can still reap the benefits of successful generalization.   Neural networks are also mentioned as a generalizing function approximator that can replace the lookup table in reinforcement learning. The paper discusses the success of using neural nets in the domain of backgammon, but also highlights the lack of guarantee of convergence when using function approximation. The proposed algorithm, Grow-Support, also utilizes neural networks as a function approximator.
Genetic Algorithms.   Explanation: The paper explicitly mentions that it is developing an exact model of a simple genetic algorithm for permutation based representations. The paper discusses the development of mixing matrices for various permutation based operators, which are a key component of genetic algorithms. The paper does not mention any other sub-categories of AI.
Genetic Algorithms, Theory.   Genetic Algorithms are directly mentioned in the paper as one of the types of search algorithms being evaluated using test functions. The paper specifically examines the role of test suites in evaluating evolutionary search algorithms, of which genetic algorithms are one type.   Theory is also relevant as the paper discusses basic principles for developing test suites and examines the characteristics of existing test functions. The paper also proposes new methods for constructing test functions with different degrees of nonlinearity, which involves theoretical considerations.
Probabilistic Methods.   Explanation: The paper presents a new algorithm, contextual ICA, which derives from a maximum likelihood density estimation formulation of the problem. This indicates the use of probabilistic methods in the paper.
Neural Networks.   Explanation: The paper investigates the use of a recurrent connectionist architecture, which is a type of neural network, to develop a parser for natural language inputs. The paper does not mention any other sub-categories of AI.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of multiversion neural-net systems to solve data-defined problems. It explains how these systems can be trained to learn from data and make predictions based on that learning.   Probabilistic Methods: The paper also discusses the use of probabilistic methods in these neural-net systems, specifically Bayesian methods. It explains how these methods can be used to incorporate prior knowledge and uncertainty into the learning process.
Neural Networks, Theory  Explanation:  - Neural Networks: The paper focuses on improving the generalization of neural networks through the use of methodologically diverse network generation processes. - Theory: The paper adapts a statistical framework developed by Littlewood and Miller to investigate the feasibility of exploiting diversity in multiple populations of neural networks. The authors also attempt to order the constituent methodological features with respect to their potential for use in the engineering of useful diversity and explore the use of relative measures of diversity between version sets.
Rule Learning.   Explanation: The paper discusses the application of machine learning techniques, specifically rule learning, to adapt knowledge-based systems to changing requirements. The focus is on learning control-knowledge in models of expertise using the KADS model.
Probabilistic Methods  Explanation: The paper discusses the winning algorithm of Rodney Price, which orders state merges according to the amount of evidence in their favor. This approach involves probabilistic reasoning, as it considers the likelihood of a state merge being correct based on the available evidence. Therefore, the paper belongs to the sub-category of Probabilistic Methods in AI.
Neural Networks, Probabilistic Methods, Theory.   Neural Networks: The paper discusses the activity of neurons in the cortex and how they transition between different states. This is a classic topic in neural network research.  Probabilistic Methods: The authors use statistical methods to analyze the data and make inferences about the underlying processes.  Theory: The paper presents a theoretical framework for understanding the dynamics of cortical activity and how it relates to cognitive processes. The authors propose a model that explains the observed phenomena and makes predictions about future experiments.
Theory.   Explanation: The paper presents a new algorithm for decision tree pruning and proves a strong performance guarantee for the generalization error of the resulting pruned tree. The focus is on developing tools of local uniform convergence to analyze the algorithm, which is a theoretical approach to improving decision tree pruning. The paper does not mention any of the other sub-categories of AI listed.
Neural Networks, Theory.   Neural Networks: The paper discusses the Support Vector Machine (SVM) as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. It also mentions the optimization of the objective function by solving a large-scale quadratic programming problem with linear and box constraints. These are all related to neural networks.  Theory: The paper discusses the derivation of Support Vector Machines, its relationship with Structural Risk Minimization (SRM), and its geometrical insight. It also mentions the inductive principle of SRM, which aims at minimizing a bound on the generalization error of a model. These are all related to theory.
Theory.   Explanation: This paper describes an algorithm for learning finite automata using local distinguishing experiments. It focuses on the theoretical aspects of the problem, such as how to combine copies of L fl for better performance, how to represent the states of the learned model using observable and hidden symbols, and how to create LDEs to reflect the distinct behaviors of the model states. The paper also provides a theoretical analysis of the algorithm's performance, showing that it can learn a model that is an ε-approximation of the unknown machine with probability 1 in a polynomial number of actions. There is no mention of case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning in the text.
Theory. The paper discusses the validation problem in Alife models of evolution and ecosystems and proposes a method of validation through reference to ecological and evolutionary theory. The authors apply a series of ecological and evolutionary validation tests to a model of species diversification and validate the ecological and evolutionary dynamics in the model against theories of predation, competition, adaptation, and island biogeography.
Genetic Algorithms, Theory.   Genetic Algorithms is the most related sub-category as the paper discusses the relative importance of mutation and crossover in GAs and proposes an adaptive mechanism for controlling the use of crossover in an EA.   Theory is also relevant as the paper explores the behavior of the adaptive mechanism in different situations and presents an improvement to the mechanism. The paper also discusses the difficulty in deciding which form of crossover to use and the need for self-adaptive EAs.
Probabilistic Methods.   Explanation: The paper explores the use of probabilistic independence networks (PINs) as a framework for modeling hidden Markov models (HMMs) and related structures. The paper reviews the basic principles of PINs and shows how the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs. The paper also introduces examples of relatively complex models to handle sensor fusion and coarticulation in speech recognition, which are treated within the graphical model framework to illustrate the advantages of the general approach.
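The forward-backward recursions mentioned in this entry can be sketched concretely. The following is a minimal illustration for a small discrete HMM, not the paper's general PIN inference algorithm; the toy transition and emission matrices are made up for the example.

```python
# Minimal forward-backward sketch for a discrete HMM (toy parameters).
import numpy as np

def forward_backward(pi, A, B, obs):
    """pi: initial probs (S,); A: transitions (S, S); B: emissions (S, O)."""
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S))  # forward messages
    beta = np.zeros((T, S))   # backward messages
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)  # posterior state marginals

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
gamma = forward_backward(pi, A, B, [0, 1, 0])
```

Each row of `gamma` is the posterior distribution over hidden states at one time step, which is exactly the quantity a general PIN inference algorithm would compute by message passing on the HMM's chain-structured graph.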
Probabilistic Methods, Theory.   Probabilistic Methods: The paper analyzes perceptron learning using a Gibbs distribution on the set of realizable labelings of the patterns. The entropy of this distribution is an extension of the Vapnik-Chervonenkis (VC) entropy, which is a measure of the capacity of a learning algorithm to fit a given set of data.   Theory: The paper extends previous work on the capacity problem to finite temperature and provides a general framework for understanding the relationship between statistical physics and learning theory. The paper also discusses the relationship between the VC and Gardner entropies within the replica formalism.
Reinforcement Learning, Probabilistic Methods.   Reinforcement Learning is present in the text as the paper discusses the acquisition of a non-reactive behaviour by a mobot through a learning regime. The agent's behavior is shaped by the consequences of its actions, which is a key characteristic of reinforcement learning.   Probabilistic Methods are also present in the text as the paper discusses the use of a specific sequence of dynamic states to facilitate the acquisition of the behavior. This involves manipulating the probabilities of the agent encountering certain states in order to guide its learning.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms are present in the text as the paper discusses evolving a solution for the artificial ant problem. The method proposed in the paper involves using genetic algorithms to produce general behaviours for simulation environments.   Reinforcement Learning is also present in the text as the paper uses the concepts of training and testing from machine learning research to develop a consistent method for producing general solutions. The paper discusses how this method can be useful in producing general behaviours for simulation environments, which is a key aspect of reinforcement learning.
Probabilistic Methods.   Explanation: The paper discusses the use of Dynamic Bayesian Networks (DBNs) for speech recognition, which is a probabilistic method for representing complex stochastic processes. The paper also mentions the use of the EM algorithm for learning models with up to 500,000 parameters, which is a probabilistic method for estimating model parameters.
Neural Networks.   Explanation: The paper describes and evaluates multi-network connectionist systems composed of "expert" networks, which are preprocessed with a competitive learning network. The study assesses the effectiveness of this approach on different types of challenging problems, using previously developed measures of 'diversity' for such systems. The paper shows that the automatic decomposition produces an effective set of specialist networks that can support a high level of performance. All of these aspects are related to neural networks, which are a sub-category of AI that involves the use of interconnected nodes to process information and learn from data.
Probabilistic Methods.   Explanation: The paper discusses the use of joint probability distributions to model the mapping between input and output variables, and the use of a set of data sampled from this distribution to identify a model of the data. The paper also discusses the use of a local risk criterion to measure the fit of the model to the system, and the use of a generalisation error to measure the performance of the model. These are all characteristic features of probabilistic methods in AI.
Probabilistic Methods, Theory  Probabilistic Methods: The paper discusses the use of Bayesian model selection, which is a probabilistic method for selecting the best model from a set of candidate models. The authors also use Bayesian model averaging to account for model uncertainty.  Theory: The paper presents theoretical results on the estimation and approximation error bounds for model selection. The authors derive upper bounds on the expected estimation error and approximation error, which can be used to guide the selection of the best model. The paper also discusses the trade-off between model complexity and generalization performance, which is a fundamental problem in machine learning theory.
Case Based, Reinforcement Learning  Explanation:  The paper primarily belongs to the sub-category of Case Based AI, as it presents a Case-Based Reasoning approach for optimization with changing criteria. The authors also mention the limitations of traditional Reinforcement Learning algorithms for repair-based optimization and propose a Case-Based Reasoning approach to RL as a potential solution. Therefore, Reinforcement Learning is also a relevant sub-category.
Probabilistic Methods.   Explanation: The paper discusses the use of regularisation in the learning procedure, which is a common technique in probabilistic methods for machine learning. The paper also presents methods for estimating the best value of the regularisation parameter, which is a key aspect of probabilistic methods. While other sub-categories of AI may also be relevant to this paper, such as Neural Networks, the focus on regularisation and parameter estimation aligns most closely with Probabilistic Methods.
This paper belongs to the sub-category of AI called Case Based.   Explanation: The paper describes a recursive covering approach to local learning, which involves using a set of training examples to build a case base, and then using this case base to make predictions for new examples. The approach is based on the idea of similarity-based reasoning, where the similarity between a new example and the cases in the case base is used to determine the prediction. This is a key characteristic of case-based reasoning, which is a subfield of AI that focuses on using past experiences to solve new problems.
Genetic Algorithms.   Explanation: The paper describes a hybrid GP/GA approach to evolve both controllers and robot bodies to achieve behavior-specified tasks. This approach combines genetic programming (GP) and genetic algorithms (GA) to evolve the controllers and robot bodies. The paper also discusses the success of evolving controllers for robots using genetic approaches. Therefore, the paper is most related to the sub-category of AI known as Genetic Algorithms.
Probabilistic Methods, Neural Networks  The paper belongs to the sub-category of Probabilistic Methods as it discusses statistical learning and regularization for regression. The authors use probabilistic methods to estimate the parameters of the regression model and to make predictions. They also use regularization techniques to prevent overfitting.  The paper also belongs to the sub-category of Neural Networks as the authors use a neural network to model the system identification problem. They use a feedforward neural network with one hidden layer to estimate the output of the system. The authors also use regularization techniques to prevent overfitting in the neural network.
Probabilistic Methods. This paper belongs to the sub-category of Probabilistic Methods as it discusses the estimation of probabilities and standard deviations in time-series modelling. The paper also discusses statistical variable selection, which involves choosing a relevant subset of input variables in a regression problem based on performance measures such as prediction error.
This paper belongs to the sub-category of AI called Neural Networks.   Explanation: The paper discusses the use of lazy learning in language processing, which involves the use of neural networks to learn from data without explicitly defining rules or abstractions. The author argues that relying too heavily on abstraction can be harmful to the performance of language processing systems, and that lazy learning can be a more effective approach. The paper does not discuss any other sub-categories of AI.
Genetic Algorithms.   Explanation: The paper explicitly mentions the use of genetic programming to evolve the topology and sizing of the components in the analog electrical circuit. This falls under the category of genetic algorithms, which use evolutionary principles to optimize solutions to a problem.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms are present in the text as the approach presented for investigating the evolution of learning, planning, and memory uses Genetic Programming. The paper focuses on evolving functional or reactive programs using Genetic Programming.   Reinforcement Learning is present in the text as the approach uses a multi-phasic fitness environment that enforces the use of memory and allows the evolved programs to learn from their environment. The paper demonstrates the usefulness of the approach by using an illustrative problem of 'gold' collection, where the evolved programs store simple representations of their environments and use these representations to produce simple plans.
Genetic Algorithms.   Explanation: The paper specifically focuses on reviewing software environments for programming Genetic Algorithms (GAs). While other sub-categories of AI may be mentioned in passing, the main focus and content of the paper is on GAs.
Neural Networks.   Explanation: The paper discusses neural network pruning methods and their impact on generalization. It presents a new pruning method that adapts the pruning strength during training based on the evolution of weights and loss of generalization. The paper extensively experiments with 14 different problems to compare the performance of different pruning methods and early stopping. Therefore, the paper is primarily related to the sub-category of AI known as Neural Networks.
Probabilistic Methods.   Explanation: The paper compares and contrasts two probabilistic methods for classifier learning - Instance Based Learning (IBL) and Naive Bayes. It explores a framework for understanding the differences between these methods and conducts experiments to analyze their relative performance in different domains. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Theory refinement. This paper belongs to the sub-category of AI known as theory refinement, as it demonstrates how theory refinement techniques can be used to build student models for intelligent tutoring systems. The paper discusses the use of theory refinement to introduce errors into a knowledge base, rather than correcting them, in order to generate more accurate student models. The approach is evaluated through a comprehensive experiment involving a large number of students interacting with an automated tutor for teaching concepts in C++ programming.
Probabilistic Methods.   Explanation: The paper discusses the use of a belief state, which is a probability distribution over the state of a stochastic process, and the challenges of representing and reasoning with such distributions in complex processes. The paper proposes a method for maintaining a compact approximation to the true belief state, which is a probabilistic approach to inference. The paper also mentions dynamic Bayesian networks, which are a probabilistic graphical model used to represent complex stochastic processes.
Probabilistic Methods, Rule Learning, Theory.   Probabilistic Methods: The paper discusses how NARS uses a formal language with an experience-grounded semantics that consistently interprets various types of uncertainty. This approach is used in the system's induction capacity, which generates conclusions from common instances of terms and combines evidence from different sources.  Rule Learning: The paper focuses on the components of NARS that contribute to the system's induction capacity, which includes an induction rule that generates conclusions from common instances of terms and a revision rule that combines evidence from different sources. These rules are used to learn from experience and adapt to new situations.  Theory: The paper presents a new approach for induction from a non-axiomatic logical point of view. It discusses the semantic foundation that underlies NARS's induction, deduction, and abduction capabilities, and how these types of inference cooperate in the system's activities. The paper also discusses the system's control mechanism, which enables knowledge-driven, context-dependent inference.
Case Based, Constraint Satisfaction  Explanation:  - Case Based: The paper discusses the limitations of traditional Case-Based Reasoning (CBR) and proposes a combination of CBR with Constraint Satisfaction techniques for design. The paper also describes the synergy and commonality that emerged as they combined the two methodologies.  - Constraint Satisfaction: The paper proposes a combination of Constraint Satisfaction techniques with Case-Based Reasoning for design, and describes the unexpected synergy and commonality between the two approaches. The paper also discusses their continuing and future work on exploiting the emergent synergy when combining these reasoning modes.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper discusses the use of kernel functions, which are commonly used in Support Vector methods, to combat the curse of dimensionality in the dual version of Ridge Regression. Kernel functions implicitly map the data into a higher-dimensional feature space.  Theory: The paper introduces a regression estimation algorithm that combines the dual version of Ridge Regression with the ANOVA enhancement of infinite-node splines. The ANOVA decomposition method is a theoretical framework used to decompose a function into additive components, and it is used in this paper to construct a family of kernel functions. The paper also discusses the performance of the algorithm relative to other algorithms, which is a theoretical evaluation.
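The dual (kernel) form of Ridge Regression described in this entry can be sketched in a few lines. This is a generic kernel ridge regression illustration, not the paper's ANOVA-spline kernel; the RBF kernel, data, and parameter values are assumptions made for the example.

```python
# Minimal kernel (dual) ridge regression sketch with an illustrative RBF kernel.
import numpy as np

def rbf_kernel(X, Z, gamma=10.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam=1e-3, gamma=10.0):
    K = rbf_kernel(X, X, gamma)
    # Dual coefficients: alpha = (K + lam * I)^-1 y — never forms the
    # (possibly infinite-dimensional) feature map explicitly.
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=10.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

X = np.linspace(0, 1, 20).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
alpha = kernel_ridge_fit(X, y)
pred = kernel_ridge_predict(X, alpha, X)
```

The "curse of dimensionality" point is visible here: the linear system is n-by-n in the number of training points, regardless of the dimensionality of the feature space the kernel induces.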
Neural Networks, Rule Learning.   Neural Networks: The paper discusses the use of a network of BCM neurons in training for orientation selectivity and ocular dominance.   Rule Learning: The paper specifically mentions the use of the BCM rule in the training of the neural network.
Probabilistic Methods.   Explanation: The paper discusses Bayesian estimation techniques for the von Mises distribution, which is a probability distribution. The focus is on examining the posterior distribution in both polar and Cartesian co-ordinates, which is a probabilistic approach. The paper compares different Bayesian and Classical estimators, which are all probabilistic methods for estimating parameters of a distribution.
Case Based, Rule Learning, Theory.   Case Based: The paper discusses the use of analogies in design, which involves drawing on past experiences and cases to inform current problem-solving. The authors also reference previous research on case-based reasoning in design.   Rule Learning: The paper discusses the use of rules and heuristics in design, such as the use of design patterns and guidelines. The authors also discuss the importance of learning and adapting rules based on feedback and experience.   Theory: The paper presents a theoretical framework for understanding creativity in design, drawing on concepts from cognitive psychology and philosophy. The authors also discuss the role of theory in guiding and evaluating design practices.
Theory.   Explanation: The paper discusses the theoretical foundations of support vector machines and reproducing kernel Hilbert spaces, and proposes a new method for selecting the optimal hyperparameters using the randomized GACV. There is no mention of any practical implementation or application of AI techniques such as neural networks, reinforcement learning, or genetic algorithms.
Genetic Algorithms, Neural Networks.   Genetic algorithms are directly used in the proposed technique called ADDEMUP to search for an accurate and diverse set of trained networks. The paper also focuses on the use of neural-network ensembles, which have been shown to be very accurate classification techniques.
The paper belongs to the sub-categories of AI: Symbolic Induction Methods, Regression Methods, and Neural Networks.   Symbolic Induction Methods: The paper compares six classifier induction algorithms, including decision trees and the Model Class Selection system, which are examples of symbolic induction methods.   Regression Methods: The paper also includes linear regression and logistic regression as two of the six algorithms compared, which are examples of regression methods.   Neural Networks: The paper includes neural nets as one of the six algorithms compared, which is an example of neural networks.
Genetic Algorithms.   Explanation: The paper discusses the use of genetic algorithms (GAs) and their reproduction operators, specifically the multi-parent diagonal and scanning crossover, to obtain an adjustable arity and graded feature of sexuality. The objective is to investigate the performance of GAs on Kauffman's NK-landscapes with varying extents of sexuality used for reproduction. The paper presents results that confirm the superiority of sexual recombination on mildly epistatic problems. Therefore, the paper belongs to the sub-category of AI known as Genetic Algorithms.
Probabilistic Methods.   Explanation: The paper discusses the problem of determining the number of constituent groups (components or classes) that best describes some data, which is a common problem in unsupervised learning. The paper applies the Minimum Message Length (MML) criterion to this problem, which is a probabilistic method for model selection. The paper also compares the MML criterion with other criteria prominent in the literature for estimating the number of components in a data set. Therefore, the paper belongs to the sub-category of Probabilistic Methods in AI.
Probabilistic Methods, Rule Learning  Probabilistic Methods: The paper discusses the use of Bayesian networks to represent physical and design knowledge in innovative design. Bayesian networks are a type of probabilistic graphical model that can be used to represent uncertain relationships between variables.  Rule Learning: The paper also discusses the use of rule-based systems to represent design knowledge. Rule-based systems use a set of if-then statements to represent knowledge and make decisions. The paper describes how these rules can be learned from examples using machine learning techniques.
Probabilistic Methods.   Explanation: The paper discusses the use of the Minimum Message Length (MML) technique for estimating the parameters of a multivariate Gaussian model with a single common factor. MML is a probabilistic method that seeks to minimize the length of the message required to describe the data and the model, and it is compared to Maximum Likelihood (ML) analysis. The paper also discusses the conditions for the existence of an MML estimate, which is based on the log likelihood ratio.
Rule Learning, Theory.   The paper discusses the development and implementation of Mode-Directed Inverse Entailment (MDIE) as a generalization and enhancement of previous approaches for inverting deduction. This involves learning from positive data and inverting implication between pairs of clauses, which falls under the sub-category of Rule Learning. The paper also provides a re-assessment of previous techniques in terms of inverse entailment, which is a theoretical aspect of AI.
Rule Learning, Theory.   Explanation:  - Rule Learning: The paper discusses a modification of a first-order learning system to specialize in finding definitions of functional relations. This involves the system learning rules or clauses that define the relation based on examples and background information.  - Theory: The paper is focused on developing a theoretical understanding of how to improve first-order learning for functional relations, and presents experimental results to support the proposed approach.
Neural Networks.   Explanation: The paper focuses on using AdaBoost to improve the performance of neural networks for character recognition tasks. While other sub-categories of AI may also be relevant to this task, such as probabilistic methods or rule learning, the primary focus of the paper is on neural networks.
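The AdaBoost procedure itself is simple to state. A minimal sketch with decision stumps follows; the paper boosts neural networks, but stumps are substituted here just to keep the example self-contained, and the data are illustrative.

```python
# Minimal AdaBoost sketch on 1-D decision stumps (illustrative weak learner).
import numpy as np

def adaboost_stumps(X, y, rounds=5):
    """X: (n, d) features; y: labels in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)  # example weights
    ensemble = []
    for _ in range(rounds):
        best = None
        # Exhaustively pick the stump (feature, threshold, sign) with the
        # lowest weighted error on the current weights.
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        a = 0.5 * np.log((1 - err) / err)  # weak learner weight
        pred = sign * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-a * y * pred)         # up-weight mistakes
        w /= w.sum()
        ensemble.append((a, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(score)

X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([-1, -1, -1, 1, 1, 1])
model = adaboost_stumps(X, y)
```

Replacing the stump search with training a neural network on the reweighted (or resampled) examples gives the variant the paper studies.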
Neural Networks, Rule Learning.   Neural Networks: The paper discusses experiments conducted using neural networks to solve the problem of finding genes in DNA.   Rule Learning: The paper discusses the use of decision trees, which are a type of rule learning algorithm, to solve the same problem. The paper also discusses the ability of constructive induction to change the representation of the problem by constructing new features, which can be seen as a form of rule learning.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms are present in the text as the paper discusses the use of evolutionary algorithms, which includes genetic algorithms, in robotics. The paper explains how genetic algorithms can be used to optimize robot behavior and design.   Reinforcement Learning is also present in the text as the paper discusses how robots can learn through trial and error using reinforcement learning. The paper explains how reinforcement learning can be used to train robots to perform tasks and improve their performance over time.
Theory.   Explanation: The paper presents exact learning algorithms for several classes of (discrete) boxes in high dimensions, and discusses the learnability of these classes. The focus is on theoretical analysis and complexity bounds, rather than on practical implementation or application of AI techniques such as neural networks or reinforcement learning.
Rule Learning.   Explanation: The paper discusses the development and capabilities of Foidl, an inductive logic programming (ILP) system that uses decision lists and implicit negatives to learn correct programs from examples. ILP is a subfield of machine learning that focuses on learning rules or logical expressions from examples, making this paper most closely related to the Rule Learning sub-category of AI.
Neural Networks.   Explanation: The paper explicitly mentions studying controllability properties of recurrent neural networks, and the contributions made in the paper are related to this specific type of neural network.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses the use of Bayesian networks to model the relationships between variables in the game of Go. The authors also use probabilistic methods to calculate the probability of winning a game based on the current board state.  Reinforcement Learning: The paper describes the use of reinforcement learning to train the neural network to play Go. The authors use a combination of supervised and reinforcement learning to improve the performance of the network.
Probabilistic Methods.   Explanation: The paper discusses the use of hidden Markov models (HMMs), which are a type of probabilistic model, for modeling and classifying dynamic behaviors in vision tasks. The paper specifically focuses on coupled HMMs, which provide an efficient way to model interacting processes and offer superior training speeds, model likelihoods, and robustness to initial conditions. The paper does not discuss any other sub-categories of AI.
Reinforcement Learning.   Explanation: The paper discusses the framework of reinforcement learning and proposes a solution to the temporal credit assignment problem in this context. The authors argue against the use of discounted rewards and propose an alternative approach to address the effect of noise and explain the parameters involved in the learning process. The empirical results presented in the paper also demonstrate the effectiveness of the proposed solution in the context of reinforcement learning.
Genetic Algorithms, Theory  Explanation:   1. Genetic Algorithms: The paper mentions the use of "parallel high-level genetic algorithms" for generating good solutions for perimeter minimization problems. This falls under the sub-category of Genetic Algorithms in AI.  2. Theory: The paper presents a theoretical framework for solving optimization problems involving the assignment of grid cells to processors. It develops a lower bound on the perimeter of a tile as a function of its area and shows how to generate minimum-perimeter tiles. This falls under the sub-category of Theory in AI.
Reinforcement Learning, Neural Networks.   Reinforcement Learning is present in the paper as the authors apply TD(λ) with value function approximation to the task of job-shop scheduling. They use a one-step lookahead greedy algorithm using the learned evaluation function to outperform the best existing algorithm for this task.   Neural Networks are present in the paper as the authors approximate the value function using a 2-layer feedforward network of sigmoid units. They use this approximation to improve the performance of the scheduling algorithm.
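For concreteness, the TD(λ) update this entry refers to can be sketched on a toy problem. This is a tabular random-walk illustration, not the paper's scheduling setup or its 2-layer network; all parameter values are made up for the example.

```python
# Minimal tabular TD(lambda) sketch on a random-walk chain (toy problem).
import random

def td_lambda_chain(n_states=5, episodes=200, alpha=0.1, lam=0.8, gamma=1.0):
    """Start in the middle; terminate at either end; reward 1 only at the
    right end. True values of the inner states are 1/4, 1/2, 3/4."""
    V = [0.0] * n_states
    random.seed(0)
    for _ in range(episodes):
        e = [0.0] * n_states            # eligibility traces
        s = n_states // 2
        while 0 < s < n_states - 1:
            s2 = s + random.choice([-1, 1])
            r = 1.0 if s2 == n_states - 1 else 0.0
            v2 = 0.0 if s2 in (0, n_states - 1) else V[s2]
            delta = r + gamma * v2 - V[s]   # TD error
            e[s] += 1.0                      # accumulating trace
            for i in range(n_states):
                V[i] += alpha * delta * e[i]
                e[i] *= gamma * lam          # decay traces
            s = s2
    return V

V = td_lambda_chain()
```

Swapping the lookup table `V` for a feedforward network (updating its weights by `alpha * delta` times the trace of the value gradient) yields the function-approximation form used in the paper.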
Genetic Algorithms.   Explanation: The paper focuses on exploring the mechanisms of convergence of genetic algorithms and uses metrics to measure their performance. The study also looks at the effects of increasing nonlinearity of functions on the convergence behavior of a simple genetic algorithm. While other sub-categories of AI may be indirectly related to the study, genetic algorithms are the primary focus and the most relevant sub-category.
Rule Learning, Theory.   The paper belongs to the sub-category of Rule Learning because it describes a new algorithm for learning goal-decomposition rules (d-rules) using inductive logic programming techniques. The d-rules are first order and are learned through a "generalize-and-test" approach.   The paper also belongs to the sub-category of Theory because it discusses the pedagogic technique of teaching problem-solving through exercises and how this approach can be applied to acquire search-control knowledge in the form of d-rules. The paper presents a theoretical framework for learning d-rules and demonstrates its feasibility through application in two planning domains.
Reinforcement Learning.   Explanation: The paper focuses on the application of reinforcement learning to improve the performance of foveal visual attention in a simulated vision system. The authors demonstrate that RL significantly improves the system's ability to recognize targets with fewer fixations by learning strategies for the acquisition of visual information relevant to the task and generalizing these strategies in ambiguous and unexpected scenario conditions. Therefore, the paper belongs to the sub-category of Reinforcement Learning in AI.
Rule Learning.   Explanation: The paper is focused on learning non-recursive, function-free first-order Horn definitions, which are a type of logical rule. The paper discusses how this class of rules can be learned using equivalence and membership queries, which are common techniques in rule learning. The results of the paper are also shown to be applicable to learning efficient goal-decomposition rules in planning domains, further emphasizing the focus on rule learning.
Rule Learning, Theory.   Explanation: The paper describes a system that learns goal decomposition rules from examples and membership queries, which falls under the category of rule learning. Additionally, the paper emphasizes the importance of theory-guided empirical learning, which suggests a focus on theoretical principles and concepts, placing it under the category of theory.
Theory.   Explanation: The paper presents a theoretical approach to the problem of global stabilization of linear systems subject to control saturation. It derives general theorems and applies them to a specific example of longitudinal flight control for an F-8 aircraft. The paper does not involve any practical implementation of AI techniques such as neural networks, genetic algorithms, or reinforcement learning.
Reinforcement Learning, Rule Learning  The paper belongs to the sub-category of Reinforcement Learning as it discusses the use of a model of the environment to avoid local learning in reinforcement learning algorithms. It also belongs to the sub-category of Rule Learning as it proposes a method for learning rules that serve this purpose: the learned rules guide the exploration of the environment and prevent the algorithm from getting stuck in local optima.
Case Based, Explanation-Based Learning  Explanation: The paper is primarily focused on improving the performance of a case-based planner, dersnlp, by detecting and explaining case failures. The use of a case library and the retrieval of previous cases are key components of case-based reasoning. Additionally, the paper utilizes explanation-based learning techniques to construct the reasons for case failures. Therefore, the paper belongs to the sub-category of Case Based AI.
Probabilistic Methods.   Explanation: The paper describes a mixture model for supervised learning of probabilistic transducers and devises an on-line learning algorithm that efficiently infers the structure and estimates the parameters of each probabilistic transducer in the mixture. Theoretical analysis and comparative simulations indicate that the learning algorithm tracks the best transducer from an arbitrarily large (possibly infinite) pool of models. The paper also presents an application of the model for inducing a noun phrase recognizer. There is no mention of Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory in the text.
Neural Networks.   Explanation: The paper discusses a competitive learning network that uses neural plasticity to mediate the competitive interaction between nodes. The paper also mentions an algorithm for feature extraction that uses binary information gain optimization, which is a common technique in neural network applications.
Theory.   Explanation: This paper focuses on the theoretical computation of the induced L2 norm of single input linear systems with saturation. It does not involve any practical implementation or application of AI techniques such as neural networks, reinforcement learning, or probabilistic methods. Therefore, the paper belongs to the sub-category of AI theory.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses the use of Bayesian training, which is a probabilistic approach to training neural networks. The Hybrid Monte Carlo method is used to approximate the true predictive distribution for a test case given a set of training cases, which is a probabilistic concept. The paper also mentions the approximation of the posterior weight distribution by a Gaussian, which is a common probabilistic method used in Bayesian neural networks.  Neural Networks: The paper is primarily focused on training backpropagation neural networks using Bayesian methods. The Hybrid Monte Carlo method is used to perform Bayesian training of these networks. The paper also discusses the automatic scaling of weight factors, which is a common technique used in neural network training.
Rule Learning, Case Based.   Rule Learning is present in the text as the paper proposes the development of a generic software tool that can be adjusted and extended incrementally based on the content of former layouts. This tool would use rules to formalize the know-how of architects and reuse it in new layouts.   Case Based is also present in the text as the paper focuses on the indexing, retrieval, and reuse of former layouts, which can be seen as cases. The proposed tool would use these cases to learn and apply the know-how of architects in new layouts.
Neural Networks.   Explanation: The paper presents a new approach to prognostic prediction using a neural architecture. The technique is applied to breast cancer prognosis, resulting in flexible, accurate models. There is no mention of any other sub-category of AI in the text.
Genetic Algorithms.   Explanation: The paper presents a new approach for Genetic Algorithms (GAs) by adapting the mutation rate during the search process, which is a key component of GAs. The paper also compares the approach with Evolution Strategies (ESs), which is another type of evolutionary algorithm. Therefore, the paper is primarily focused on GAs and their improvement through self-adaptation.
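A sketch of the self-adaptation idea, assuming an ES-style log-normal perturbation of a per-individual mutation rate (the paper's exact adaptation rule may differ, and the constant `tau` below is illustrative):

```python
import math
import random

# Self-adaptive mutation sketch: each individual carries its own mutation
# rate, which is itself perturbed before being applied to the genome.
def mutate(genome, rate, tau=0.22):
    """Return a mutated bit-string child and its adapted mutation rate."""
    # Log-normal self-adaptation of the rate, clamped to a sane range.
    rate = min(0.5, max(1e-3, rate * math.exp(tau * random.gauss(0, 1))))
    # Flip each bit independently with the adapted rate.
    child = [(1 - g) if random.random() < rate else g for g in genome]
    return child, rate

random.seed(0)
child, new_rate = mutate([0, 1, 1, 0, 1], rate=0.05)
```

The child inherits `new_rate`, so selection acts on mutation rates as well as on genomes.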
Theory  Explanation: The paper presents a new interactive model of teaching in the learning theory community, and analyzes its power and efficiency compared to previous teaching models. The focus is on theoretical analysis rather than practical implementation or application of specific AI techniques.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper presents an algorithm called Addemup that uses genetic algorithms to explicitly search for a highly diverse set of accurate trained networks. The algorithm creates an initial population and uses genetic operators to continually create new networks, keeping the set of networks that are highly accurate while disagreeing with each other as much as possible.   Neural Networks: The paper discusses the use of neural-network ensembles, which is a technique where the outputs of a set of separately trained neural networks are combined to form one unified prediction. The paper presents an algorithm that uses genetic algorithms to create a set of highly accurate and diverse neural networks for an effective ensemble. The experiments conducted in the paper also show that the proposed algorithm is able to generate a set of trained networks that is more accurate than several existing ensemble approaches.
Probabilistic Methods, Rule Learning, Theory.   Probabilistic Methods: The paper discusses the limitations of default logic in representing common sense reasoning tasks and proposes a quantitative counterpart called sequential thresholding, which takes into account the importance of context in constructing a non-monotonic extension. This approach involves assigning probabilities to different rules based on the context, which is a probabilistic method.  Rule Learning: The paper discusses the formulation of modular default rules and argues that they should not be presumed to work in all or most circumstances. Instead, the importance of context in reasoning tasks should be taken into account. This approach involves learning rules based on the context, which is a form of rule learning.  Theory: The paper presents a semantic characterization of generic non-monotonic reasoning and provides a link between default logic and sequential thresholding. This theoretical framework helps to integrate the two mechanisms and can be beneficial to both.
Reinforcement Learning, Probabilistic Methods, Theory.   Reinforcement learning is the main focus of the paper, as it describes algorithms for making optimal decisions in a generalized model that subsumes Markov decision processes. The paper also utilizes probabilistic methods, such as the stochastic-approximation theorem, to prove convergence of the algorithms. Finally, the paper falls under the category of theory as it develops generalizations of value iteration, policy iteration, model-based reinforcement-learning, and Q-learning.
Theory.   Explanation: The paper presents a learning algorithm that implements tree-structured bias and provides theoretical predictions that are empirically validated. The focus is on the theoretical aspect of incorporating prior knowledge into learning, rather than on specific AI sub-categories such as neural networks or reinforcement learning.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper introduces a family of Boltzmann machines that can be trained using standard gradient descent. The networks can have one or more layers of hidden units, with tree-like connectivity.   Probabilistic Methods: The paper discusses the implementation of the supervised learning algorithm for these Boltzmann machines exactly, without resort to simulated or mean-field annealing. The stochastic averages that yield the gradients in weight space are computed by the technique of decimation.
Rule Learning, Probabilistic Methods, Neural Networks.   Rule Learning is present in the text as the paper aims to learn rules for dispatching technicians based on the data describing resolutions to telephone network local loop "troubles."   Probabilistic Methods are present in the text as the data describing resolutions to telephone network local loop "troubles" are notoriously unreliable, and the paper describes four different approaches to dealing with the problem of "bad" data.   Neural Networks are present in the text as the paper offers evidence that machine learning can help to build a dispatching method that will perform better than the system currently in place. Neural networks are a type of machine learning algorithm that can be used for classification tasks such as dispatching technicians.
Probabilistic Methods, Rule Learning, Theory.   Probabilistic Methods: The paper discusses the decomposition of prediction error into its natural components, which involves probabilistic methods such as conditional probabilities and expected values.  Rule Learning: The paper discusses the error behavior of a classifier, which is a type of rule learning algorithm.  Theory: The paper presents a theoretical framework for understanding the concepts of bias, variance, and prediction error in the context of classification rules. It also derives bootstrap estimates of these components, which are based on statistical theory.
Theory.   Explanation: The paper deals with system-theoretic concepts and provides results on observability and minimal realizations. While the paper mentions neural network theory, it does not focus on the application of neural networks or any other AI sub-category.
Case Based.   Explanation: The paper presents a case-based retrieval system called REPRO that supports chemical process design. The paper extensively discusses the case representation and structural similarity measure, which are crucial problems in case-based reasoning. The experimental results and expert evaluation also demonstrate the usefulness of the system in real-world problems. Therefore, the paper belongs to the sub-category of AI known as Case-Based.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper discusses the use of genetic algorithms and specifically focuses on the effectiveness of crossover operators in genetic programming.   Neural Networks: The paper also discusses the use of genetic algorithms in designing neural network modules and their control circuits. The effectiveness of crossover operators is evaluated in this context.
Genetic Algorithms, Rule Learning.   Genetic Algorithms (GAs) are the main focus of the paper, as the authors explore the use of GAs to construct a system called GABIL that continually learns and refines concept classification rules from its interaction with the environment. The paper also discusses the performance of GABIL compared to other concept learners, and how GABIL is enhanced by allowing the GAs to adaptively select the appropriate strategies.   Rule Learning is also present in the paper, as the authors identify strategies responsible for the success of concept learners and implement a subset of these strategies within GABIL to produce a multistrategy concept learner. The paper also discusses how GABIL continually learns and refines concept classification rules from its interaction with the environment.
Theory   Explanation: The paper discusses the theoretical issue of overfitting in the context of selecting a hypothesis from a set of hypotheses using cross-validation data. It proposes a new algorithm based on leave-one-out cross-validation to address this issue. While the paper mentions the use of a randomized learning algorithm to generate the set of hypotheses, it does not focus on any specific sub-category of AI such as neural networks or genetic algorithms.
Theory.   Explanation: The paper focuses on investigating learning with membership and equivalence queries assuming incomplete information, and presents algorithms to learn monotone k-term DNF with membership queries only, and to learn monotone DNF with membership and equivalence queries. The paper does not discuss or apply any of the other sub-categories of AI listed in the question.
Theory.   Explanation: The paper presents a theoretical framework for hybrid systems, which combines finite automata and linear systems. It does not involve any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Theory.   Explanation: This paper presents new characterizations of the Input to State Stability property and shows the equivalence between the ISS property and several variations proposed in the literature. It does not involve any specific AI techniques or algorithms, but rather focuses on theoretical concepts and properties.
Theory.   Explanation: This paper presents a theoretical approach, specifically a Successive Linear Programming (SLP) approach, for solving the initialization problem of differential algebraic equations (DAEs). The paper does not involve any application of case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning. Rule learning is also not applicable in this context.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms (GAs) are mentioned in the abstract as one of the efficient and promising tools in the field of optimization and machine learning techniques. The Evolving Non-Determinism (END) model is also described as proposing an inventive way to explore the space of states, which is a key feature of GAs. The END model is then applied to the sorting network problem and Solitaire game, where it is able to evolve solutions and strategies through a process of simulated co-evolution, which is a common technique used in GAs.  Reinforcement Learning is also present in the text, as the END model is described as using simulated co-evolution to remedy some drawbacks of previous techniques. This can be seen as a form of reinforcement learning, where the model is learning from feedback in the form of fitness scores and adjusting its behavior accordingly. Additionally, the END model is able to evolve a strategy for the Solitaire game that is comparable to a human-designed strategy, which is another example of reinforcement learning.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms (GAs) are mentioned in the abstract as one of the efficient and promising tools in the field of optimization and machine learning techniques. The END model presented in the paper proposes an inventive way to explore the space of states using simulated co-evolution of organisms, which remedies some drawbacks of previous techniques like GAs.   Reinforcement Learning is present in the paper as the END model is applied to the Solitaire game, where it evolved a strategy comparable to a human-designed strategy. Reinforcement learning is a subfield of machine learning that deals with how an agent can learn to take actions in an environment to maximize a cumulative reward. The END model's ability to evolve a strategy for the Solitaire game is an example of reinforcement learning.
Case Based, Rule Learning  Explanation:   - Case Based: The paper discusses the use of cases as a basis for a solution and how they can indicate the boundaries within which a solution can be found. The system implemented in the domain of personal income tax planning, chiron, is an example of a case-based reasoning system.  - Rule Learning: The paper discusses how the system uses cases to find a range of acceptable answers and then chooses a point within those boundaries to solve the problem. This process involves learning rules from the cases and applying them to new situations.
Genetic Algorithms.   Explanation: The paper explicitly mentions the use of genetic programming, which is a subfield of genetic algorithms. The paper describes the use of genetic programming to develop image processing software for detecting signs of breast cancer. The paper also discusses program optimizations to speed up the evolution process of the genetic programming system.
Neural Networks.   Explanation: The paper proposes two algorithms for constructing and training feedforward networks of linear threshold units, which are a type of neural network. The paper also compares the performance of these algorithms with alternative procedures derived from similar strategies, which are also related to neural networks.
Probabilistic Methods.   Explanation: The paper discusses the naive Bayesian classifier and Bayesian tree learning algorithm, both of which are probabilistic methods used in machine learning. The proposed algorithm, lazy Bayesian tree learning, is also a probabilistic method that builds a most appropriate Bayesian tree for each test example.
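For illustration, the naive Bayesian core that lazy Bayesian trees build on can be sketched as follows (this is plain naive Bayes with Laplace smoothing, not the per-test-example lazy tree construction itself):

```python
from collections import Counter, defaultdict

# Plain naive Bayesian classifier sketch over discrete attributes.
def train_nb(X, y):
    classes = Counter(y)                    # class -> count
    counts = defaultdict(Counter)           # (class, attr index) -> value counts
    for row, c in zip(X, y):
        for i, v in enumerate(row):
            counts[(c, i)][v] += 1
    return classes, counts

def predict_nb(model, row):
    classes, counts = model
    n = sum(classes.values())
    def score(c):
        p = classes[c] / n                  # class prior
        for i, v in enumerate(row):
            seen = counts[(c, i)]
            # Laplace smoothing over the values seen for this attribute.
            p *= (seen[v] + 1) / (classes[c] + len(seen) + 1)
        return p
    return max(classes, key=score)

# Toy data: two binary attributes, two classes.
model = train_nb([[1, 0], [1, 1], [0, 1], [0, 0]], ["a", "a", "b", "b"])
label = predict_nb(model, [1, 0])  # "a" on this toy data
```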
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents an approach that learns the conditional probabilities of a Bayesian network with hidden variables by transforming it into a multi-layer feedforward neural network (ANN). The weights in the ANN are then learned using standard backpropagation techniques.   Probabilistic Methods: The paper deals with the problem of learning Bayesian networks with hidden variables, a probabilistic modeling task, and focuses on networks with noisy-or and noisy-and nodes, whose conditional probabilities are mapped onto the weights of the neural network.
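The noisy-or combination rule for such nodes can be sketched as follows (illustrative only; the paper's contribution is the mapping of such nodes onto network weights, which is not shown here, and the parameter values are toy examples):

```python
# Noisy-or node: the probability that a binary effect fires given its active
# parents, where each active parent i independently causes it with
# probability q[i]. The effect fails only if every active cause fails.
def noisy_or(q, parents_active):
    p_fail = 1.0
    for qi, active in zip(q, parents_active):
        if active:
            p_fail *= (1.0 - qi)
    return 1.0 - p_fail

p = noisy_or([0.8, 0.5], [1, 1])  # 1 - 0.2 * 0.5 = 0.9
```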
Probabilistic Methods, Rule Learning  Explanation:   Probabilistic Methods: The paper discusses the use of machine learning methods for model calibration, which can be viewed as a form of supervised learning in the presence of prior knowledge. The process of calibration involves setting free parameters to optimize the predictive accuracy of the model, which can be seen as a probabilistic approach to learning.  Rule Learning: The paper describes a divide-and-conquer approach to calibrating the model, in which subsets of the parameters were calibrated while others were held constant. This approach was made possible by carefully selecting training sets that exercised only portions of the model and by designing error functions for each part that had desirable properties. This can be seen as a form of rule learning, where the rules are based on the structure of the model and the constraints introduced by the prior knowledge.
Genetic Algorithms.   Explanation: The paper describes the use of evolutionary algorithms, specifically the SAMUEL genetic learning system, to explore alternative robot behaviors within a simulation model. This falls under the category of genetic algorithms, which are a type of evolutionary computation technique.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper discusses the 0-1 loss function with categorical random variables, which is a probabilistic method used in classification techniques.  Theory: The paper explores the concepts of variance and bias and develops a decomposition of the prediction error into functions of the systematic and variable parts of our predictor. It also discusses the various definitions that have been proposed, which is a theoretical aspect of the topic.
Case Based, Theory  Explanation:  - Case Based: The paper is about retrieving relevant cases in case-based reasoning systems. - Theory: The paper presents a formal definition of context-based similarity and discusses historical background on research in similarity assessment.
Probabilistic Methods, Theory.   The paper introduces and describes the AdaBoost algorithm, which is a probabilistic method for reducing error in learning algorithms. The paper also discusses the related notion of a pseudo-loss, which is a theoretical concept for forcing a learning algorithm to focus on the hardest-to-discriminate labels. The experiments conducted in the paper assess the performance of AdaBoost with and without pseudo-loss on real learning problems, providing empirical evidence to support the theoretical claims made about the algorithm. Overall, the paper is focused on the theoretical and practical aspects of boosting as a probabilistic method for improving learning algorithms.
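A minimal sketch of binary AdaBoost over a pool of one-feature threshold stumps (illustrative; the paper's pseudo-loss variant for multiclass problems is not shown, and the data and stump pool below are toy assumptions):

```python
import math

# Binary AdaBoost sketch: labels and stump outputs are +/-1.
def adaboost(X, y, stumps, rounds=3):
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        # Pick the stump with the smallest weighted error.
        errs = [sum(wi for wi, x, yi in zip(w, X, y) if h(x) != yi)
                for h in stumps]
        err, h = min(zip(errs, stumps), key=lambda t: t[0])
        err = min(max(err, 1e-10), 1 - 1e-10)        # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Reweight: up-weight mistakes, down-weight correct examples.
        w = [wi * math.exp(-alpha * yi * h(x)) for wi, x, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

X = [0.1, 0.3, 0.6, 0.9]
y = [1, 1, -1, -1]
stumps = [lambda x, t=t: 1 if x < t else -1 for t in (0.2, 0.5, 0.8)]
clf = adaboost(X, y, stumps)
```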
Neural Networks, Theory.   Neural Networks: The paper discusses the quantization of the parameters of a Perceptron, which is a type of neural network. The learning algorithms presented in the paper are designed to maximize the robustness of the Perceptron, a property of neural networks used as classifiers.  Theory: The paper presents efficient learning algorithms for maximizing the robustness of a Perceptron, which involves tackling the combinatorial problem arising from the discrete weights. The paper also discusses the quantization of parameters, a central problem in hardware implementations of neural networks using numerical technology.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the use of variants of the Bayesian classifier, which is a probabilistic method, to extract diagnostic knowledge from medical databases. The authors also mention the use of fuzzy discretization of numerical attributes, which is a probabilistic technique.  Rule Learning: The paper discusses the use of the Assistant algorithm for top-down induction of decision trees, which is a rule learning technique. The authors also mention the use of expert-defined diagnostic rules as pre-classifiers or generators of additional training instances for injuries with few training examples.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses a machine learning task of model calibration, which involves supervised learning from examples in the presence of prior knowledge. This is a form of probabilistic modeling, where the goal is to optimize the accuracy of the model for making future predictions.  Theory: The paper presents a new divide-and-conquer method for solving the model calibration task, which involves analyzing the model to identify a series of smaller optimization problems whose sequential solution solves the global calibration problem. This approach is based on theoretical principles of efficient learning from prior knowledge, and the paper argues that such methods will be required for agents with large amounts of prior knowledge to learn efficiently.
Neural Networks.   Explanation: The paper focuses on the use of neural network models for identification and control of nonlinear systems. The authors discuss the design and stability analysis of these models, and provide examples of their application. While other sub-categories of AI may also be relevant to this topic, such as reinforcement learning or probabilistic methods, the primary focus of the paper is on neural networks.
Rule Learning.   Explanation: The paper describes Dlab, a formalism for defining and traversing finite subspaces of first order clausal logic, which can be used in inductive learning systems to learn concepts. This is a form of rule learning, where the system learns rules or logical statements that describe the relationships between different variables or features. The paper does not discuss any of the other sub-categories of AI listed.
Neural Networks.   Explanation: The paper compares the representational capabilities of one hidden layer and two hidden layer nets consisting of feedforward interconnections of linear threshold units. It discusses the use of neural networks for classification and control problems, and provides a general result showing that nonlinear control systems can be stabilized using two hidden layers, but not in general using just one.
Reinforcement Learning, Theory.   Reinforcement learning is present in the paper as the proposed framework involves autonomous systems that learn and discover from their environment. This is a key characteristic of reinforcement learning, where an agent learns to take actions in an environment to maximize a reward signal.   Theory is also present, as the framework is based on the idea of autonomous learning from the environment. The paper proposes a coherent way to integrate the various intelligent activities involved in a discovery process, which can be seen as a theoretical framework for learning from the environment.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper compares Support Vector Machines to radial basis function networks, which are a type of neural network.   Probabilistic Methods: The paper discusses two different cost functions for Support Vectors, one of which is Huber's robust loss function, which is a probabilistic method for dealing with outliers in data.
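Huber's robust loss mentioned above is quadratic near zero and linear in the tails, so outliers contribute only linearly; a sketch (the threshold `delta` below is an illustrative default):

```python
# Huber's robust loss: quadratic for small residuals, linear for large ones,
# which limits the influence of outliers compared to a pure squared loss.
def huber(r, delta=1.0):
    a = abs(r)
    return 0.5 * r * r if a <= delta else delta * (a - 0.5 * delta)

vals = [huber(0.5), huber(3.0)]  # 0.125 (quadratic), 2.5 (linear tail)
```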
Neural Networks, Probabilistic Methods.   Neural Networks: The paper evaluates the classification accuracy of three neural network classifiers for fingerprint and OCR applications. The multilayer perceptron, radial basis function, and probabilistic neural networks were used for the evaluation.   Probabilistic Methods: The paper also evaluates the classification accuracy of four statistical classifiers, including the normal and k-nearest neighbor classifiers. The best accuracy obtained for both problems was provided by the probabilistic neural network.
Theory.   Explanation: This paper presents a theoretical approach to the problem of trajectory tracking in the presence of input constraints. It derives necessary conditions for the reparameterizing function and formulates the problem as an optimal control problem. There is no mention of any AI techniques such as neural networks, genetic algorithms, or reinforcement learning being used in the paper.
Genetic Algorithms.   Explanation: The paper explicitly mentions Genetic Programming as the technique used for detecting cliques in a network. The paper also discusses the implications of the clique detection problem to the Strongly Typed Genetic Programming paradigm. Therefore, Genetic Algorithms is the most related sub-category of AI to this paper.
Case Based.   Explanation: The paper focuses on developing an adaptive similarity assessment method for case-based explanation. It discusses the use of case-based reasoning (CBR) and how it can be improved by incorporating adaptive similarity assessment. The paper does not mention any other sub-categories of AI such as Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Case Based, Rule Learning  Explanation:   The paper describes a new approach to acquiring case adaptation knowledge in case-based reasoning (CBR) by initially solving adaptation problems using abstract rules and general memory search heuristics, and then storing successful adaptation episodes as cases for future use. This approach combines both rule learning and case-based reasoning, making it relevant to both sub-categories of AI.
Theory. The paper introduces the Inferential Theory of Learning, which provides a conceptual framework for explaining the logical capabilities of learning strategies. The theory postulates that learning is a process of modifying the learner's knowledge by exploring their experience, and that this process can be described as a search in a knowledge space guided by learning goals and using knowledge transmutations. The paper also outlines a multistrategy task-adaptive learning methodology that aims to integrate a range of inferential learning strategies. None of the other sub-categories of AI are directly present in the text.
Neural Networks, Theory.   Neural Networks: The paper discusses the Support Vector machine, which is a type of learning machine that contains neural networks as a special case. The comparison is made between the Support Vector machine and a classical approach that uses error backpropagation, which is a common training method for neural networks.  Theory: The paper is based on statistical learning theory and discusses the theoretical foundations of the Support Vector machine. The authors also mention that the SV approach is not only theoretically well-founded but also superior in practical applications.
Rule Learning, Theory.   The paper describes a method for inducing and pruning ensembles of decision stumps, which are a type of rule-based classifier. The approach is based on a hill-climbing procedure, which is a type of search algorithm commonly used in rule learning. The paper also discusses the trade-off between predictive accuracy and intelligibility, which is a theoretical issue in the field of machine learning.
Theory.   Explanation: The paper introduces the concept of integral input-to-state stability (iISS) and provides a necessary and sufficient characterization of the iISS property expressed in terms of dissipation inequalities. The paper does not involve any application of AI techniques such as neural networks, genetic algorithms, or reinforcement learning.
Probabilistic Methods.   Explanation: The paper discusses the use of graphs to represent independence structure in multivariate probability models, and how this approach has been pursued across various research disciplines such as probabilistic expert systems, statistical physics, image analysis, genetics, decoding of error-correcting codes, Kalman filters, and speech recognition with Markov models. The paper also mentions belief networks, hidden Markov models, and Markov random fields, which are all probabilistic methods.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the use of Bayes' theorem and Jeffrey's rule in probability-based reasoning systems for belief revision.   Theory: The paper presents a theoretical analysis of the limitations of the Bayesian approach to belief revision, and distinguishes between belief revision and belief updating. It also discusses the information needed for the operation of revision in its general form.
Probabilistic Methods, Theory.   Probabilistic Methods: The Non-Axiomatic Logic defined in the paper can uniformly represent and process randomness, fuzziness, and ignorance. This suggests the use of probabilistic methods in the logic.   Theory: The paper defines three binary term logics and uses them to define a Non-Axiomatic Logic. It also discusses the relations between these logics and Aristotle's syllogistic logic. This indicates a focus on theoretical aspects of AI.
Probabilistic Methods.   Explanation: The paper describes a probabilistic algorithm (PAO) for finding an optimal derivation strategy based on conditional probabilities of successful database retrievals. The paper also discusses how to obtain these strategies in polynomial time for certain classes of graphs. There is no mention of any of the other sub-categories of AI listed.
Probabilistic Methods, Rule Learning, Theory.   Probabilistic Methods: The paper mentions "several types of uncertainties" that can be represented and processed in the system. This suggests that the system uses probabilistic reasoning to handle uncertain information.  Rule Learning: The paper describes the system as using an extended syllogism, which involves the use of rules to make deductions. The system also carries out abduction, which involves generating hypotheses based on observed data.  Theory: The paper presents the Non-Axiomatic Reasoning System as a new form of term logic that unifies deduction, induction, abduction, and revision. The paper also discusses the dynamic organization of the system's memory, which can be interpreted as a network. These concepts are all related to the theoretical foundations of AI.
Probabilistic Methods.   Explanation: The paper discusses various approaches for dealing with uncertainty in artificial intelligence, and specifically mentions that "several approaches have been suggested and studied for dealing with various types of uncertainty." The paper then introduces a new approach, the Non-Axiomatic Reasoning System, which is designed to handle uncertainty in situations where the system's knowledge and resources are insufficient. The paper also compares the new approach with previous approaches in terms of uncertainty representation and interpretation. All of these aspects are related to probabilistic methods, which involve representing uncertainty using probability distributions and carrying out operations on these distributions.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses the problem of identifying the underlying switching process in multi-stationary time series, which is a probabilistic problem. The authors propose using nonlinear gated experts with simulated annealing to perform the segmentation and system identification of the time series. Simulated annealing is a probabilistic optimization algorithm that is used to find the global minimum of a function.  Neural Networks: The paper proposes using nonlinear gated experts to perform the segmentation and system identification of the time series. Nonlinear gated experts are a type of neural network that consists of multiple experts, each of which is responsible for modeling a different part of the input space. The gating network decides which expert to use for a given input.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper proposes a multiple scale neural system for synthetic aperture radar (SAR) processing. The system consists of multiple layers of neural networks that process the SAR data at different scales to extract boundary and surface information. The authors also discuss the architecture and training of the neural system.  Probabilistic Methods: The paper discusses the use of probabilistic methods, specifically Markov random fields, for representing the extracted boundary and surface information. The authors explain how the probabilistic model can be used to improve the accuracy of the boundary and surface representation.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper discusses the creation of disjunctive concept definitions, which is a common approach in rule learning. The paper also discusses the problem of small disjuncts, which is a well-known issue in rule learning.   Probabilistic Methods are present in the text as the paper investigates the impact of noise on learning. Probabilistic methods are commonly used to model uncertainty and noise in data, and the paper discusses how noise affects learning in two different domains.
Reinforcement Learning, Probabilistic Methods  Explanation:   This paper belongs to the sub-category of Reinforcement Learning because it discusses the use of a reinforcement learning algorithm called Q-learning to adapt the parameters of a dynamic system. The authors propose a method for stabilizing the adaptation process by introducing a probabilistic component to the Q-learning algorithm.   Additionally, the paper also belongs to the sub-category of Probabilistic Methods because it uses probabilistic modeling to estimate the uncertainty in the Q-values and to adjust the exploration-exploitation trade-off during the adaptation process. The authors use a Bayesian approach to estimate the posterior distribution of the Q-values and to compute the expected value of the Q-function.
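The Q-learning adaptation described in the entry above can be illustrated with a minimal sketch. This is not the paper's algorithm: the states, actions, and the hyperparameters `ALPHA`, `GAMMA`, and `EPSILON` are illustrative assumptions, showing only the standard tabular Q-learning update and an epsilon-greedy exploration-exploitation trade-off.

```python
import random

# Illustrative constants (assumed, not from the paper).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
states, actions = range(4), range(2)
Q = {(s, a): 0.0 for s in states for a in actions}

def choose_action(s):
    # Epsilon-greedy: explore with probability EPSILON, otherwise exploit.
    if random.random() < EPSILON:
        return random.choice(list(actions))
    return max(actions, key=lambda a: Q[(s, a)])

def update(s, a, reward, s_next):
    # Standard Q-learning backup: move Q(s, a) toward the bootstrapped target.
    target = reward + GAMMA * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])
```

A Bayesian treatment, as the entry describes, would replace the point estimates in `Q` with posterior distributions over Q-values; the sketch above shows only the underlying point-estimate update.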
Rule Learning.   Explanation: The paper discusses the construction of a rule for predicting future responses based on a training set of data, and focuses on estimating the error rate of this rule using cross-validation and bootstrap methods. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Theory.
Case Based.   Explanation: The paper discusses the application of Memory-Based Learning (MBL), a form of case-based learning, to fast NP chunking. The authors use a fast decision-tree variant of MBL (IGTree) and a cascaded classifier architecture to improve both accuracy and speed.
Theory  Explanation: The paper discusses a new approach to concept learning that addresses consistency directly, rather than sacrificing it for simplicity or other goals. The focus is on developing a theoretical understanding of how tightly hypotheses should fit the training data for different problems. The paper does not discuss any specific AI sub-category such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Genetic Algorithms, Reinforcement Learning  Explanation:  This paper belongs to the sub-categories of Genetic Algorithms and Reinforcement Learning.   Genetic Algorithms: The paper proposes a method for evolving optimal populations using XCS classifier systems, in which a genetic algorithm evolves the population of classifiers: crossover and mutation operators generate new classifiers, and a fitness function evaluates their performance.   Reinforcement Learning: The XCS classifier system is also a reinforcement learning algorithm that learns from feedback in the form of rewards or punishments. The paper describes how XCS uses a reward function to guide the learning process and how it balances the exploration-exploitation trade-off.
Genetic Algorithms.   Explanation: The paper presents a problem-independent constraint handling mechanism for Genetic Algorithms (GAs) and applies it to solve the 3-SAT problem. The experiments conducted in the paper show that the proposed mechanism, Stepwise Adaptation of Weights (SAW), substantially increases GA performance. The paper also compares the SAW-ing GA with the best heuristic technique, WGSAT, and concludes that the GA is superior. Therefore, the paper primarily belongs to the sub-category of AI known as Genetic Algorithms.
Neural Networks.   Explanation: The paper focuses on the use of Artificial Neural Networks for regression and classification, and introduces a method to interpret the results of these models. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper describes the use of a standard genetic algorithm for feature selection. It also mentions the use of genetic programming for finding symbolic functions.  Neural Networks: The paper focuses on using evolutionary computation to select and transform features for use as inputs to a feedforward neural network. The success of the approach is evaluated on the prediction of unemployment rates in various European countries.
Rule Learning, Case Based.   Rule Learning is present in the text as the paper describes the construction of a knowledge base using several learning algorithms in concert with an inference engine.   Case Based is also present in the text as the paper presents a case study from the telecommunications domain and demonstrates the balanced cooperative modeling approach for the development of a knowledge-based application using MOBAL system.
Probabilistic Methods.   Explanation: The paper introduces a class of adaptive algorithms for source separation that are based on the idea of serial updating and implement an adaptive version of equivariant estimation. The performance of these algorithms depends only on the (normalized) distributions of the source signals, which are probabilistic in nature. The paper also provides closed-form expressions of convergence rates, stability conditions, and interference rejection levels via an asymptotic performance analysis, which further emphasizes the probabilistic nature of the approach.
Rule Learning, Theory.   The paper discusses the use of decision trees, which are a type of rule-based learning algorithm. The modification made to the bagging procedure involves using randomly-generated decision stumps, which are a type of decision tree with a depth of one. This modification is aimed at increasing the diversity of the decision trees used in the ensemble. The paper also discusses the theoretical hypothesis that boosting produces more diverse trees than bagging, and the empirical findings that support this hypothesis.
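The bagging-with-random-stumps modification described in the entry above can be sketched in a few lines. This is an illustrative toy, not the paper's code: the 1-D data, the random (rather than optimized) choice of split threshold, and all function names are assumptions, showing only how bootstrap sampling plus randomly generated depth-one trees yields a diverse majority-vote ensemble.

```python
import random

def make_stump(xs, ys, rng):
    # Depth-one tree with a randomly chosen threshold (not optimized),
    # mirroring the diversity-increasing modification described above.
    threshold = rng.choice(xs)
    left = [y for x, y in zip(xs, ys) if x <= threshold]
    right = [y for x, y in zip(xs, ys) if x > threshold]
    vote = lambda labels: max(set(labels), key=labels.count) if labels else 0
    l_lab, r_lab = vote(left), vote(right)
    return lambda x: l_lab if x <= threshold else r_lab

def bagged_stumps(xs, ys, n_stumps=25, seed=0):
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_stumps):
        # Bootstrap sample: draw with replacement from the training set.
        idx = [rng.randrange(len(xs)) for _ in xs]
        stumps.append(make_stump([xs[i] for i in idx],
                                 [ys[i] for i in idx], rng))
    # Majority vote over the ensemble of stumps.
    return lambda x: max((0, 1), key=lambda c: sum(s(x) == c for s in stumps))

xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = [0, 0, 0, 1, 1, 1]
predict = bagged_stumps(xs, ys)
```

Boosting, by contrast, would reweight the training set adaptively between rounds instead of drawing uniform bootstrap samples, which is the source of the extra diversity the entry's hypothesis concerns.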
Probabilistic Methods, Theory.  The paper discusses the use of arcing algorithms, which involve adaptively reweighting the training set, growing a classifier using the new weights, and combining the classifiers constructed to date. The authors introduce a function called the edge, which is related to the margin and is used to understand arcing algorithms. They also derive a relation between the optimal reduction in the maximum value of the edge and the PAC concept of weak learner. These concepts are related to probabilistic methods and theory in AI.
Genetic Algorithms.   Explanation: The paper presents a new representation technique and a crossover operator for genetic algorithms to solve job shop scheduling problems. The paper focuses on the use of genetic algorithms as a search method to find optimal solutions to the problem. While other sub-categories of AI may also be relevant to job shop scheduling, such as rule learning or probabilistic methods, the paper specifically emphasizes the use of genetic algorithms.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents a network architecture for blind source separation and derives adaptation equations for the weights in the network.   Probabilistic Methods: The approach to blind source separation is based on the information maximization principle, which is a probabilistic method. The paper also mentions maximizing the information transferred through the network, which is another probabilistic concept.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper discusses the reference class problem in probability theory and how the specificity priority principle is the currently accepted solution in that domain.  Theory: The paper presents a new approach, Non-Axiomatic Reasoning System (NARS), which is a theoretical framework for reasoning about conflicting beliefs. It also critiques the currently accepted solutions and argues that the solution provided by NARS is better.
Probabilistic Methods.   Explanation: The paper discusses the application of independent component analysis (ICA), which is a modern signal processing technique based on probabilistic methods, to multivariate financial time series. ICA involves linearly mapping the observed multivariate time series into a new space of statistically independent components (ICs), which can be viewed as a factorization of the portfolio since joint probabilities become simple products in the coordinate system of the ICs. The paper also mentions that ICA focuses on higher order statistics, which is another characteristic of probabilistic methods.
Theory.   Explanation: The paper presents a theoretical framework for understanding and inferring causation, rather than implementing a specific AI technique or algorithm. While the paper does mention an "effective algorithm for inferred causation," this is presented as a tool for implementing the theoretical framework, rather than as the focus of the paper.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper presents a method for inducing rules that are accurate and explainable with respect to the qualitative model.   Probabilistic Methods are present in the text as the paper discusses the quantification of the value of qualitative models in terms of their equivalence to additional training examples.
Neural Networks, Theory.   Neural Networks: The paper discusses a new explanation based learning method called EBNN that utilizes purely neural network representations. The paper explores the properties of this method and compares it to other EBL methods based on symbolic representations.   Theory: The paper discusses the concept of explanation based learning and explores the correspondence between neural network based EBL methods and EBL methods based on symbolic representations. The paper also discusses the properties of the EBNN algorithm, including its robustness to errors in the domain theory.
Genetic Algorithms.   Explanation: The paper discusses the performance of multi-parent crossover operators on numerical function optimization problems using genetic algorithms. The paper introduces and generalizes traditional crossover operators used in genetic algorithms. The focus of the paper is on the experimental results of using multi-parent crossover operators in genetic algorithms. Therefore, this paper belongs to the sub-category of AI known as Genetic Algorithms.
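A minimal sketch of a multi-parent crossover of the kind the entry above discusses follows. This is an assumed illustration, not the paper's operator: the function name `scanning_crossover` and the gene-per-position sampling scheme are hypothetical, showing only how a child can inherit each gene from one of several parents, generalizing the usual two-parent operators.

```python
import random

def scanning_crossover(parents, rng=random):
    # For every gene position, copy the gene from a randomly chosen parent.
    # With exactly two parents this reduces to uniform crossover.
    length = len(parents[0])
    return [rng.choice(parents)[i] for i in range(length)]

# Three parents instead of the traditional two.
parents = [[0, 0, 0, 0], [1, 1, 1, 1], [2, 2, 2, 2]]
child = scanning_crossover(parents)
```

In a full genetic algorithm this operator would be combined with mutation and fitness-based selection; only the recombination step is shown here.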
Case Based, Reinforcement Learning.   Case-based reasoning is mentioned multiple times throughout the text, as NACODAE is being developed under the Practical Advances in Case-Based Reasoning project. The purpose of NACODAE is to assist in decision aid tasks, which can be accomplished through case-based reasoning.   Reinforcement learning is not explicitly mentioned, but NACODAE is intended to assist in tasks such as crisis response planning and fault diagnosis, which could potentially benefit from reinforcement learning techniques.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the use of sensor information to accurately model autonomous systems, which requires automating the art of large-scale modeling. This involves probabilistic methods for estimating models from sensor data.  Theory: The paper presents a formalization of decompositional, model-based learning (DML), which is a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The paper also discusses the analogy between learning and consistency-based diagnosis, which is a theoretical concept.
Genetic Algorithms.   Explanation: The paper proposes an approach in which visual routines for simple tasks are evolved using Genetic Programming techniques. The results obtained are promising: the evolved routines correctly classify up to 93% of the images, better than the best algorithm the authors were able to write by hand. Therefore, the paper belongs to the sub-category of AI known as Genetic Algorithms.
Reinforcement Learning, Theory.   Reinforcement learning is present in the paper as the authors discuss the idea of a reasoning system having goals or a utility function and acting based on its beliefs to indirectly assign utility to its beliefs. This is a key concept in reinforcement learning, where an agent learns to take actions that maximize a reward signal.   Theory is also present in the paper as the authors present a theory of knowledge goals, or desires for knowledge, and their use in the processes of understanding and learning. They also provide case studies to illustrate their theory.
Theory.   Explanation: The paper presents a theory of motivational analysis and the construction of volitional explanations, discussing the content and process of building such explanations. It does not focus on any specific sub-category of AI such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Genetic Algorithms, Neural Networks.   Genetic algorithms are mentioned in the abstract as the approach used for developing improved neural network architectures. The paper discusses the use of genetic algorithms for constructing backpropagation networks for real world tasks.   Neural networks are the main focus of the paper, as the title suggests. The paper presents a network representation with certain properties and shows results with various applications.
Case Based, Rule Learning  Explanation:   This paper belongs to the sub-category of Case Based AI because it describes a story understanding program that retrieves past explanations from situations already in memory and uses them to build explanations for novel stories about terrorism, much as a case-based reasoning system retrieves past cases to solve new problems.  It also belongs to the sub-category of Rule Learning because the reasoner improves its understanding of the domain by filling in gaps in its knowledge base, elaborating its explanations, and learning new indices for them, gradually evolving a better understanding of the domain in the way a rule learning system learns new rules from examples.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents a method that uses unsupervised training to provide prediction experts for the inherent dynamical modes. These experts are neural networks that are trained to predict the next value of the time series given the previous values.  Probabilistic Methods: The trained experts are then used in a hidden Markov model that allows for the modeling of drifts. The hidden Markov model is a probabilistic method that models the probability of transitioning from one mode to another and the probability of observing a particular value given the current mode.
Rule Learning, Theory.   Explanation: The paper discusses techniques for refining incomplete theories through the creation and utilization of intermediate concepts, which is a key aspect of rule learning. The paper also explicitly mentions the EITHER theory refinement system, which is a theoretical framework for rule learning. Therefore, the paper belongs to the sub-category of AI known as Rule Learning. Additionally, the paper is focused on the refinement of theories, which is a fundamental aspect of AI research related to Theory.
Reinforcement Learning, Neural Networks  The paper belongs to the sub-category of Reinforcement Learning as it presents a new algorithm for improving advantage updating in reinforcement learning systems. It also discusses the application of reinforcement learning to a Markov game.  The paper also belongs to the sub-category of Neural Networks as it uses a single-hidden-layer sigmoidal network to store the advantage function. It also presents a new algorithm, Incremental Delta-Delta, for use in incremental training of neural networks.
Probabilistic Methods.   Explanation: The paper discusses an adaptation of the "peak seeking" regime used in unsupervised learning processes such as competitive learning and k-means, which enables the learning to capture low-order probability effects and thus to more fully capture the probabilistic structure of the training data. This indicates that the paper is focused on probabilistic methods in AI.
Case Based, Rule Learning.   Case Based: The paper discusses the use of cases in the SeqER system for planning scientific experiments. The system uses derivational analogy to reuse planning experience captured as cases. Cases are retrieved from a large casebase using massively parallel methods.   Rule Learning: The paper also mentions the use of rule-based methods in the SeqER system for planning experiments. These methods are used in conjunction with derivational analogy to integrate automated planning techniques with domain knowledge.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses belief networks (also known as Bayesian networks), a type of probabilistic graphical model used to represent and reason about uncertainty. The authors explore various aspects of belief networks, including their structure, inference algorithms, and learning methods.   Theory: The paper presents a theoretical analysis of belief networks, discussing their properties and limitations. The authors also compare belief networks to other probabilistic models, such as Markov random fields, and discuss the advantages and disadvantages of each.
Reinforcement Learning, Probabilistic Methods.   Reinforcement Learning is present in the text as the paper discusses the design of embedded agents and the importance of finding good monitoring strategies. Reinforcement learning is a type of machine learning that involves an agent learning to make decisions in an environment by receiving feedback in the form of rewards or punishments. The monitoring strategies discussed in the paper are aimed at improving the performance of the embedded agent, which is a key goal of reinforcement learning.  Probabilistic Methods are also present in the text as the paper discusses the mathematical and empirical analysis of monitoring strategies for a wide class of problems. Probabilistic methods involve using probability theory to model and analyze complex systems, and the analysis of monitoring strategies in the paper involves mathematical and statistical analysis of the performance of different strategies.
Probabilistic Methods.   Explanation: The paper discusses Bayesian network model learning, which is a probabilistic method used for modeling uncertain relationships between variables. The focus of the paper is on creating Bayesian network models that are tailored to a specific goal or purpose, which is a key aspect of probabilistic methods. The paper also discusses the K2 algorithm, which is a popular probabilistic method for learning Bayesian networks.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as it discusses the bias and variance of estimators provided by temporal difference value estimation algorithms in absorbing Markov chains.   Theory is also applicable as the paper provides analytical expressions for the mean squared error curves in temporal difference learning, illustrating classes of learning curve behavior and sensitivity to parameter choices.
Theory.   Explanation: This paper presents a theoretical approach to the problem of minimizing misclassified points by a plane in n-dimensional real space. It formulates the problem as a linear program with equilibrium constraints (LPEC) and proposes a Frank-Wolfe-type algorithm for solving the associated penalty problem. The paper does not involve any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Probabilistic Methods.   Explanation: The paper introduces a new approach to optimal compression based on the Boltzmann distribution, a probabilistic model. The expectation-maximization parameter estimation algorithms used in the approach are likewise probabilistic methods.
Probabilistic Methods.   Explanation: The paper focuses on explaining predictions and recommendations of probabilistic systems, specifically Bayesian Networks and Influence Diagrams. The algorithm presented in the paper is designed to compute predictive explanations in these probabilistic models. Therefore, the paper is most closely related to the sub-category of AI known as Probabilistic Methods.
Probabilistic Methods.   Explanation: The paper discusses the Minimum Description Length (MDL) and Minimum Message Length (MML) principles, which are probabilistic methods used for model selection and inference. The paper compares and contrasts these two methods, highlighting their similarities and differences. Therefore, this paper belongs to the sub-category of AI that deals with probabilistic methods.
Neural Networks.   Explanation: The paper focuses on the performance analysis of the CNS-1, a supercomputer designed for training and evaluating large multilayered feedforward neural networks. The study uses sophisticated coding to optimize the performance of the machine during recall and training, and analyzes the impact of different parameters on its performance. Therefore, the paper is primarily related to the sub-category of Neural Networks in AI.
Case Based.   Explanation: The paper presents an interactive, case-based approach to crisis response using Inca, which relies on case-based methods to seed the response development process with initial candidate solutions drawn from previous cases. The paper also discusses an artificial hazardous materials domain, Haz-Mat, that was developed for the purpose of evaluating candidate assistant mechanisms for crisis response.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses the use of machine learning to automatically adapt the behavior of a scheduling assistant to accommodate different users. This involves learning user models, which can be seen as a probabilistic approach to modeling user behavior.  Reinforcement Learning: The paper describes an empirical study of learning user models in an adaptive assistant for crisis scheduling. The goal of the learning task is to predict user operations, which can be seen as a reinforcement learning problem where the system learns to take actions that maximize a reward signal (i.e., accurately predicting user behavior). The paper also discusses the use of reinforcement learning techniques such as Q-learning in future work.
This paper belongs to the sub-category of AI called Case Based.   Explanation: The paper presents a framework called CABINS (Case-Based INcremental Schedule improvement and reactive repair) which is based on the idea of using past cases to improve future schedules. The framework involves acquiring knowledge from past cases, iteratively revising schedules based on this knowledge, and using reactive repair techniques to handle unexpected events. This approach is characteristic of Case-Based reasoning, which involves solving new problems by adapting solutions from similar past cases.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper discusses Bayesian approaches for determining non-informative prior distributions in a parametric model family, specifically the family of Bayesian networks.  Theory: The paper presents and compares different theoretical approaches for determining non-informative priors, including Bayesian and information-theoretic methods. It also discusses the modified definition of stochastic complexity by Rissanen and the Minimum Message Length (MML) approach by Wallace.
Theory.   Explanation: The paper presents a theory for a goal-based approach to intelligent information retrieval, which addresses the representation of knowledge goals, methods for generating and transforming these goals, and heuristics for selecting among potential inferences in order to feasibly satisfy such goals. The paper does not discuss or apply any of the other sub-categories of AI listed.
Reinforcement Learning, Distributed AI.   Reinforcement learning is the main focus of the paper, as the authors propose using communication as reinforcement to overcome the credit assignment problem between agents.   Distributed AI is also relevant, as the paper discusses fully distributed multi-agent systems with multiple agents/robots learning in parallel while interacting with each other. The authors propose using communication to reduce the undesirable effects of locality in these systems.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms: The paper investigates the power of genetic algorithms at solving the MAX-CLIQUE problem. It measures the performance of a standard genetic algorithm and introduces a new genetic algorithm, the multi-phase annealed GA, which exhibits superior performance. The paper also discusses modifications made to the genetic algorithm, such as changes in input representation and systematic local search, to improve its performance.  Probabilistic Methods: The genetic algorithm is a probabilistic method that uses randomization to search for solutions. The paper discusses the use of random graphs and embedded cliques in the problem instances. The paper also discusses the need for diversity enhancement to avoid premature convergence to local minima.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses the use of statistical models such as mixtures of Gaussians and locally weighted regression, which are probabilistic methods.  Neural Networks: The paper also discusses the use of feedforward neural networks and how statistical techniques can be used to select data for them.
Reinforcement Learning.   Explanation: The paper's title explicitly mentions "Reinforcement Learning" as the focus of the research. The abstract also provides a brief overview of the paper's objective, which is to design and analyze efficient reinforcement learning algorithms. Therefore, it is clear that this paper belongs to the sub-category of AI known as Reinforcement Learning.
Theory.   Explanation: The paper is focused on theoretical analysis of learning curves in the context of machine learning, without discussing any specific algorithm or application. The authors analyze the convergence rates of different types of learners for various concept classes, and draw theoretical boundaries between rational and exponential convergence. The paper does not involve any practical implementation or experimentation with specific AI techniques, and does not discuss any specific sub-category of AI such as neural networks or reinforcement learning.
Rule Learning, Neural Networks  Explanation:  The paper primarily belongs to the sub-category of Rule Learning as it presents a novel method for extracting symbolic rules from trained neural networks. The paper describes algorithms for extracting both conjunctive and M-of-N rules, and presents experiments that show that their method is more efficient than conventional search-based approaches.   The paper also belongs to the sub-category of Neural Networks as it deals with understanding trained neural networks and extracting rules from them. The paper exploits the property that networks can be efficiently queried and presents a method that casts rule extraction not as a search problem, but instead as a learning problem.
Probabilistic Methods, Rule Learning  Explanation:   Probabilistic Methods: The approach taken in this paper involves dividing the grid into stripes and using a knapsack integer program to efficiently solve the problem. The knapsack problem is a well-known combinatorial optimization problem that can be solved with techniques such as dynamic programming or branch and bound.   Rule Learning: The algorithm presented in this paper involves a specific set of rules for dividing the grid into stripes and using the knapsack problem to generate the grid region assignments. These rules are based on mathematical principles and are designed to optimize the solution to the minimum perimeter problem.
Neural Networks.   Explanation: The paper presents and evaluates two algorithms for constructing Radial Basis Function Networks, which are a class of neural networks. The paper does not mention any other sub-categories of AI.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper proposes a real-valued genetic algorithm (GA) to optimize the number and positions of fuzzy prototypes. The GA acts on all of the classes at once and measures fitness as classification accuracy, allowing the system to profit from global information about class interaction.   Neural Networks: The paper presents the concept of a receptive field for each prototype, which is used to replace the classical, fixed distance-based membership function by an infinite fuzzy support membership function. This new membership function is inspired by that used in the hidden layer of RBF networks.
Theory.   Explanation: The paper focuses on theoretical analysis of the performance of gradient descent in on-line linear prediction, and provides worst-case bounds on the sum of squared prediction errors under various assumptions. The paper does not discuss the implementation or application of any specific AI techniques such as neural networks, probabilistic methods, or reinforcement learning.
Theory.   Explanation: The paper discusses the theoretical analysis of the complexity of learning classes of smooth functions using the mistake-bound model. It does not involve any practical implementation or application of AI techniques such as neural networks, genetic algorithms, or reinforcement learning.
Case Based, Theory  Explanation:  - Case Based: The paper discusses the use of nearest-neighbor algorithms, which are a type of case-based reasoning.  - Theory: The paper investigates the use of a specific distance metric and evaluates its effectiveness through empirical study. The paper also discusses the trade-off between bias and variance in feature weighting.
Probabilistic Methods, Case Based  Explanation:   Probabilistic Methods: The paper discusses the use of probabilistic methods in machine learning, specifically in the context of estimating the quality of attributes. The RELIEF algorithm, which is the focus of the paper, uses probabilistic methods to estimate the relevance of attributes.  Case Based: The paper discusses the use of the RELIEF algorithm in various artificial and real-world problems, which is a characteristic of case-based reasoning. The algorithm uses examples to estimate the quality of attributes, which is also a characteristic of case-based reasoning.
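The core of RELIEF's attribute-quality estimate — penalizing attributes that differ on a near example of the same class and rewarding those that differ on a near example of a different class — can be sketched as follows. This is a minimal binary-class version under the assumption of numeric features scaled to [0, 1]; the function and variable names are illustrative, not from the paper:

```python
import random

def relief(examples, labels, n_iter=100, seed=0):
    """Minimal RELIEF sketch: estimate a relevance weight per attribute.

    examples: equal-length numeric feature vectors (assumed scaled to [0, 1]).
    labels:   class labels; at least two examples per class are assumed.
    """
    rng = random.Random(seed)
    n_attrs = len(examples[0])
    weights = [0.0] * n_attrs

    def dist(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    for _ in range(n_iter):
        i = rng.randrange(len(examples))
        x, y = examples[i], labels[i]
        # Nearest hit: closest other example of the same class;
        # nearest miss: closest example of a different class.
        hits = [j for j in range(len(examples)) if j != i and labels[j] == y]
        misses = [j for j in range(len(examples)) if labels[j] != y]
        hit = min(hits, key=lambda j: dist(x, examples[j]))
        miss = min(misses, key=lambda j: dist(x, examples[j]))
        for a in range(n_attrs):
            # Differing on the near hit lowers the weight;
            # differing on the near miss raises it.
            weights[a] -= abs(x[a] - examples[hit][a]) / n_iter
            weights[a] += abs(x[a] - examples[miss][a]) / n_iter
    return weights
```

On a toy dataset where one attribute determines the class and another is noise, the weight of the determining attribute should dominate.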
Probabilistic Methods.   The paper presents an analysis of the nearest neighbor algorithm using a uniform distribution over the instance space and calculating probabilities of correct classification based on the distance between test instances and the prototype of the concept, as well as the distance between the nearest stored training case and the test instance. The analysis also takes into account the number of relevant and irrelevant attributes. These probabilistic methods are used to predict learning curves for artificial domains and are experimentally validated.
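The classifier analyzed above is plain 1-nearest-neighbor: the test instance takes the label of the closest stored training case. A minimal sketch (names are illustrative, not the paper's code) makes clear why irrelevant attributes matter — the distance treats every attribute alike:

```python
def nearest_neighbor_classify(query, training_cases):
    """Classify `query` by the label of the closest stored case (1-NN).

    training_cases: list of (feature_vector, label) pairs.
    Euclidean distance weighs relevant and irrelevant attributes equally,
    which is why irrelevant attributes degrade the learning curve.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    best_case = min(training_cases, key=lambda case: sq_dist(query, case[0]))
    return best_case[1]
```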
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses the use of genetic algorithms for combinatorial optimization problems. It describes how the algorithm works and how it can be applied to various optimization problems.   Theory: The paper presents a theoretical framework for eugenic evolution, which is a modification of genetic algorithms that incorporates principles of eugenics. It discusses the ethical implications of this approach and proposes guidelines for its use.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper describes a simple model of coevolution that includes the addition of genes for longevity and mutation rate in individuals. This is a classic example of a genetic algorithm, where the fitness of individuals is determined by their ability to survive and reproduce in a given environment, and their genetic makeup is subject to mutation and recombination.  Theory: The paper presents a theoretical model of coevolution and mutation rates, exploring the consequences of different types of interactions between individuals. The authors use mathematical and computational methods to analyze the behavior of the model and draw conclusions about the evolution of mutation rates.
Genetic Algorithms, Reinforcement Learning  Explanation:  This paper belongs to the sub-categories of Genetic Algorithms and Reinforcement Learning.   Genetic Algorithms: The paper mentions the use of a learning classifier system based on genetics. This is a type of genetic algorithm that uses a population of rules to evolve and improve over time.   Reinforcement Learning: The paper discusses the use of a simulated robot that learns through trial and error, which is a key characteristic of reinforcement learning. The robot learns to acquire certain behaviors based on rewards and punishments, which is a common approach in reinforcement learning.
Probabilistic Methods.   Explanation: The paper explicitly states that it gives a probabilistic interpretation to instance-based learning and performs Bayesian inference with a mixture of prototype distributions. The other sub-categories of AI (Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, Theory) are not mentioned or discussed in the text.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms: The paper presents a comparative study of genetic algorithms and their search properties when treated as a combinatorial optimization technique. The authors show that for large and difficult MAX-SAT instances, the contribution of cross-over to the search process is marginal. Little is lost if it is dispensed with altogether, running mutation and selection as an enlarged Metropolis process.   Probabilistic Methods: The paper compares genetic algorithms to the Metropolis process and simulated annealing, which are both probabilistic methods. The authors show that for these problem instances, genetic search consistently performs worse than simulated annealing when subject to similar resource bounds. The correspondence between the two algorithms is made more precise via a decomposition argument, which provides a framework for interpreting the results.
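The Metropolis process that mutation-plus-selection reduces to amounts to single-bit flips with temperature-controlled acceptance. A minimal MAX-SAT version is sketched below; the clause encoding and names are illustrative assumptions, not from the paper:

```python
import math
import random

def metropolis_maxsat(clauses, n_vars, temperature=0.5, steps=2000, seed=0):
    """Minimal Metropolis search for MAX-SAT.

    clauses: list of clauses, each a list of signed 1-based literals
             (3 means x3, -3 means NOT x3).
    Returns the best assignment found and its satisfied-clause count.
    """
    rng = random.Random(seed)

    def num_satisfied(assign):
        return sum(
            any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
            for clause in clauses
        )

    assign = [rng.random() < 0.5 for _ in range(n_vars)]
    score = num_satisfied(assign)
    best_assign, best_score = list(assign), score
    for _ in range(steps):
        v = rng.randrange(n_vars)          # mutation: flip one variable
        assign[v] = not assign[v]
        new_score = num_satisfied(assign)
        delta = new_score - score
        # Accept improvements always; accept downhill moves with
        # Boltzmann probability exp(delta / T).
        if delta >= 0 or rng.random() < math.exp(delta / temperature):
            score = new_score
            if score > best_score:
                best_assign, best_score = list(assign), score
        else:
            assign[v] = not assign[v]      # reject: undo the flip
    return best_assign, best_score
```

Lowering the temperature over time turns this into simulated annealing, the comparison baseline in the paper.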
Theory  Explanation: The paper discusses the concept of constructive induction and argues for a specific definition of it. It does not focus on any specific AI sub-category such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning. Therefore, the paper belongs to the Theory sub-category.
Probabilistic Methods.   Explanation: The paper focuses on the use of probabilistic models for combinatorial optimization problems and presents an algorithm, COMIT, that combines probabilistic modeling with fast search techniques. The paper also includes a review of probabilistic modeling for combinatorial optimization. While other sub-categories of AI may also be relevant to the topic of combinatorial optimization, the primary focus of this paper is on probabilistic methods.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper describes a stochastic search method based on a generalization of simulated annealing, which is a probabilistic method for finding global optima in a search space. The system named SFOIL uses this method to alleviate the local optimization problem in Inductive Logic Programming.  Neural Networks: The stochastic search method used in SFOIL is based on a Markovian neural network, which is a type of neural network that models the probability distribution of a sequence of states. The paper describes how this neural network is used in the stochastic search method to explore the search space in a more effective way.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of Radial Basis Function (RBF) neural networks for financial time series analysis. The authors explain how RBF networks can be used to model the non-linear relationships between financial variables and predict future values.  Probabilistic Methods: The paper also discusses the use of Gaussian mixture models (GMMs) for financial time series analysis. GMMs are a probabilistic method used to model the distribution of data. The authors explain how GMMs can be used to identify patterns in financial data and make predictions based on those patterns.
Probabilistic Methods.   Explanation: The paper presents MIMIC, a framework that uses probability densities to guide a randomized search through the solution space. The algorithm estimates the global structure of the optimization landscape and uses this knowledge to refine the search. The approach is based on probabilistic methods and obtains significant speed gains over other randomized optimization procedures.
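MIMIC itself fits a chain of pairwise dependencies; the simpler univariate variant of density-guided search (in the style of PBIL, with illustrative names) conveys the core loop of sampling from a model of good solutions and refitting the model:

```python
import random

def density_guided_search(fitness, n_bits, pop_size=50, elite_frac=0.2,
                          learn_rate=0.3, generations=40, seed=0):
    """Univariate sketch of probability-density-guided optimization.

    Maintains one marginal probability per bit, samples candidates from
    the density, and refits the marginals to the best samples. MIMIC
    additionally models pairwise dependencies between bits.
    """
    rng = random.Random(seed)
    probs = [0.5] * n_bits                       # uninformative initial density
    best = None
    for _ in range(generations):
        pop = [[rng.random() < p for p in probs] for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        if best is None or fitness(pop[0]) > fitness(best):
            best = pop[0]
        elite = pop[: max(1, int(elite_frac * pop_size))]
        # Move each marginal toward its frequency among the elite samples.
        for i in range(n_bits):
            freq = sum(x[i] for x in elite) / len(elite)
            probs[i] += learn_rate * (freq - probs[i])
    return best
```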
Reinforcement Learning, Rule Learning  Explanation:  This paper belongs to the sub-categories of Reinforcement Learning and Rule Learning. Reinforcement Learning is present in the use of the XCS classifier system to learn from rewards and punishments in the environment. Rule Learning is present in the XCS classifier system's ability to learn rules from the environment and generalize them to new situations. The paper specifically focuses on the generalization capabilities of XCS, which is a key aspect of Rule Learning.
Probabilistic Methods.   Explanation: The paper presents a method for inducing selective Bayesian network classifiers, which is a probabilistic method in the field of artificial intelligence. The paper discusses the use of information-theoretic metrics to efficiently select a subset of attributes from which to learn the classifier, which is a common approach in probabilistic methods. The paper also compares the proposed method with existing selective Bayesian network induction approaches, which are also probabilistic methods.
This paper belongs to the sub-category of AI known as Genetic Algorithms.   Explanation: The paper discusses the use of evolutionary algorithms, specifically genetic algorithms, for the design of neural architectures. The authors provide a taxonomy of different approaches to evolutionary design of neural networks and review literature related to this topic. While other sub-categories of AI, such as Neural Networks and Reinforcement Learning, are also mentioned in the paper, the focus is primarily on the use of genetic algorithms.
Case Based, Theory  Explanation:  The paper deals with the problem of choosing the best similarity measure in the context of instance-based learning of classifications, which is a key component of case-based reasoning systems. The paper also presents a theory of optimal similarity measures and proves the optimality of a specific similarity measure within a restricted class. Therefore, the paper belongs to the sub-category of Case Based AI. Additionally, the paper presents a theory of optimal similarity measures, which falls under the sub-category of Theory in AI.
Reinforcement Learning.   Explanation: The paper explores the problem of learning the Gittins indices on-line without the aid of a process model, and suggests utilizing process-state-specific Q-learning agents to solve their respective restart-in-state-i subproblems. The example provided in the paper also applies online reinforcement learning to a problem of stochastic scheduling, which is a classic application of reinforcement learning. Therefore, the paper belongs to the sub-category of Reinforcement Learning in AI.
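The per-state agents described above rest on ordinary tabular Q-learning. The one-step update is sketched below for a deterministic toy environment (a generic sketch under assumed names, not the paper's exact restart-in-state-i agents):

```python
import random

def q_learning(transitions, rewards, n_states, n_actions,
               episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Generic tabular Q-learning sketch.

    transitions[s][a] -> next state, rewards[s][a] -> immediate reward
    (deterministic here for brevity; the real setting is stochastic).
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = rng.randrange(n_states)
        for _ in range(20):                      # bounded rollout
            if rng.random() < epsilon:           # epsilon-greedy exploration
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda act: Q[s][act])
            s_next, r = transitions[s][a], rewards[s][a]
            # One-step Q-learning update toward the bootstrapped target.
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q
```

In a two-state chain where only leaving state 0 pays a reward, the learned values should prefer the rewarded action.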
Theory.   Explanation: The paper analyzes the performance of top-down algorithms for decision tree learning and proves that they are boosting algorithms. The focus is on theoretical analysis rather than practical implementation or application of AI techniques such as neural networks or reinforcement learning.
Rule Learning, Theory.   Rule Learning is present in the text as the paper discusses the generation of binary decision trees using a greedy algorithm. Theory is also present as the paper presents a counter example to a hypothesis and discusses the effectiveness of different impurity functions in generating optimal decision trees.
Genetic Algorithms, Neural Networks, Reinforcement Learning.   Genetic Algorithms: The paper proposes a novel evolutionary learning approach to designing a modular system automatically, which is based on speciation using a technique based on fitness sharing. This is a common technique used in genetic algorithms.  Neural Networks: The success of modular artificial neural networks in speech and image processing is mentioned as a typical example of a modular approach to solving difficult problems.  Reinforcement Learning: The paper discusses improving co-evolutionary game learning, specifically learning to play iterated prisoner's dilemma, which is a type of reinforcement learning. The paper also mentions the poor generalization ability and sudden mass extinctions observed in earlier co-evolutionary learning, which are common problems in reinforcement learning. The proposed approach improves the generalization ability of the system.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses the use of probability to minimize demands on sensing, as well as the use of statistical learning methods to gradually reduce sensory load as the system gains experience in a domain.  Reinforcement Learning: The paper describes the Icarus architecture, which operates in cycles and activates a state that matches the environmental situation, letting that state control behavior until its conditions fail or until finding another matching state with higher priority. This is a form of reinforcement learning, where the system learns to select actions based on the feedback it receives from the environment. The paper also reports experimental evaluations of the system's ability to reduce sensory load through learning mechanisms, which is a key aspect of reinforcement learning.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper uses the genetic algorithm to play Iterated Prisoner's Dilemma and evaluates each member of the population based on its performance against other members of the current population. The paper also discusses the impact of seeding the population with expert strategies and the importance of maintaining genetic diversity.   Reinforcement Learning: The paper discusses the evolution of strategies in a dynamic environment where the algorithm is optimizing to a moving target, causing an "arms race" of innovation. The paper also studies the robustness of the strategies evolved and how they perform against a wide variety of opponents. The example of a population of naive cooperators being exploited by a defect-first strategy demonstrates the importance of learning from experience and adapting to changing circumstances.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper proposes unsupervised learning algorithms based on neural networks with feedback connections.   Probabilistic Methods: The algorithms aim to minimize the reconstruction error of the encoders, which can be seen as a probabilistic approach to modeling the input data.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper describes an architecture called DOLCE, which is a standard recurrent neural net trained by gradient descent.   Probabilistic Methods: The paper describes how DOLCE learns to recover the discrete state with maximum a posteriori probability from the noisy state. The adaptive clustering technique used in DOLCE quantizes the state space, which is a probabilistic method.
Probabilistic Methods, Neural Networks, Theory.   Probabilistic Methods: The paper proposes a statistical mechanical framework for modeling discrete time series using maximum likelihood estimation via Boltzmann learning in one-dimensional networks with tied weights. The Boltzmann chains contain hidden Markov models (HMMs) as a special case.   Neural Networks: The paper introduces Boltzmann chains as a new architecture for modeling time series data, which addresses some of the shortcomings of HMMs. The paper also discusses two new architectures: parallel chains and looped networks, and shows how to implement the Boltzmann learning rule exactly, in polynomial time, without resort to simulated or mean-field annealing.   Theory: The paper presents a theoretical framework for modeling time series data using Boltzmann chains and hidden Markov models. The paper also discusses the exact decimation procedures from statistical mechanics that are used to implement the Boltzmann learning rule.
Genetic Algorithms.   Explanation: The paper explicitly mentions the design and evaluation of three versions of genetic algorithms for computing vector quantizers. The use of genetic algorithms is the main focus of the paper, and there is no mention of other sub-categories of AI such as neural networks or reinforcement learning.
Rule Learning.   Explanation: The paper discusses the construction of composite features (m-of-n concepts) as internal nodes of decision trees, which is a common approach in rule learning. The paper explores different greedy methods for building these concepts and evaluates their effectiveness on various data sets. There is no mention of any other sub-category of AI in the text.
Rule Learning, Theory.   Explanation:  - Rule Learning: The paper presents a new approach to handling numerical information in Inductive Logic Programming (ILP), which is a subfield of rule learning. The approach, called First Order Regression (FOR), combines ILP and numerical regression to induce first-order logic descriptions that are amenable to numerical regression among real-valued variables. The program Fors is an implementation of this idea, where numerical regression is focused on a distinguished continuous argument of the target predicate. The paper describes applications of Fors on several real-world data sets, indicating that it is an effective tool for ILP applications that involve numerical data.  - Theory: The paper presents a theoretical framework for combining ILP and numerical regression, and shows that FOR can be viewed as a generalisation of the usual ILP problem. The paper also discusses the properties of FOR and its relationship to other ILP approaches.
Case Based, Probabilistic Methods, Theory.  Case Based: The paper uses a well-documented example of the invention of the telephone by Alexander Graham Bell to explore the mechanisms of goal handling processes involved in invention.  Probabilistic Methods: The paper proposes mechanisms to explain how Bell's early thematic goals gave rise to new goals to invent the multiple telegraph and the telephone, and how the new goals interacted opportunistically.  Theory: The paper presents a theoretical framework for understanding goal handling processes involved in invention, identifying new kinds of goals with special properties and mechanisms for processing such goals, as well as means of integrating opportunism, deliberation, and social interaction into goal/plan processes. The paper also describes a computational model, ALEC, that accounts for the role of goals in invention.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses a simple model of coevolution that includes the evolution of a gene for the mutation rate of the individual. This gene is subject to selection pressures and evolves over time, which is a key characteristic of genetic algorithms.  Theory: The paper presents a theoretical model of coevolution and mutation rates, and discusses the implications of this model for understanding the evolution of different genes. The paper does not involve any practical implementation or application of AI techniques, but rather presents a theoretical framework for understanding the evolution of mutation rates.
Reinforcement Learning.   Explanation: The paper discusses the problem of optimizing learning in environments where data-query is not free and the cost of a query depends on the distance from the current location in state space to the desired query point. The authors propose an algorithm based on Kaelbling's DG-learning algorithm, which is a reinforcement learning algorithm that uses distance relationships to guide exploration in state space. The paper also discusses the tradeoff between the potential benefit of exploring a state and the cost of reaching that state, which is a key concept in reinforcement learning. Therefore, this paper belongs to the sub-category of Reinforcement Learning in AI.
Neural Networks.   Explanation: The paper discusses the capabilities of Single Layer Recurrent Neural Networks (SLRNNs) with hard-limiting neurons, and compares the representational power of first-order and second-order SLRNNs. The paper also discusses how augmented first-order SLRNNs can be used to efficiently implement finite-state recognizers using state-splitting. Therefore, the paper primarily belongs to the sub-category of Neural Networks in AI.
Neural Networks, Rule Learning.   Neural Networks: The paper discusses the implementation and comparison of several artificial neural networks (ANNs) for learning the past tense of English verbs.   Rule Learning: The paper presents a general-purpose Symbolic Pattern Associator (SPA) based on the decision-tree learning algorithm ID3. The SPA is a rule-based model that learns patterns in the data to make predictions about the past tense of unseen verbs. The paper also discusses a new default strategy for decision-tree learning algorithms, which is a rule-based approach.
Probabilistic Methods.   Explanation: The paper specifically discusses the need for a mechanism to explain probabilistic systems and proposes an approach to defining a notion of better explanation in such systems. The other sub-categories of AI listed (Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, Theory) are not directly relevant to the content of the paper.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper presents a cooperative coevolutionary approach to learning complex structures, which involves the parallel evolution of substructures. This approach is based on genetic algorithms, which are used to evolve the substructures.  Neural Networks: The paper does not explicitly mention neural networks, but the cooperative coevolutionary approach involves the interaction of substructures to form higher level structures, which is similar to the way neural networks are constructed. Additionally, the architecture is designed to be general enough to incorporate a priori knowledge, which is a common feature of neural network models.
Theory.   This paper presents an analytic comparison of different techniques for bounding the H∞-norm of nonlinear systems with saturation. The focus is on theoretical analysis and comparison of the different techniques, rather than on the application of AI methods. While some of the techniques may involve AI methods (such as neural networks for modeling the nonlinearities), the paper does not focus on the use or development of these methods. Therefore, the paper belongs to the sub-category of Theory.
Reinforcement Learning, Rule Learning  Reinforcement Learning is present in the text as the paper discusses the need for an intelligent system to adapt and learn from the environment through continuous interaction and experimentation. This is a key aspect of reinforcement learning, where an agent learns to take actions in an environment to maximize a reward signal.  Rule Learning is also present in the text as the paper proposes a practical approach to learning from the environment by pinpointing faults in the domain knowledge that cause unexpected behavior and resorting to experimentation to correct the system's knowledge. This approach involves learning rules about the environment and updating them incrementally based on new information.
Neural Networks, Rule Learning.   Neural Networks: The paper discusses the architecture of a neural network and proposes a pruning heuristic to improve its generalization performance. It also presents simulations of training and pruning a recurrent neural network on strings generated by regular grammars.  Rule Learning: The paper shows that rules extracted from pruned networks are more consistent with the rules to be learned, indicating that the pruning method improves the network's ability to learn rules.
Theory  Explanation: This paper focuses on the theoretical analysis of model selection problems and the bias/variance decomposition. It does not involve the implementation or application of any specific AI sub-category such as neural networks or reinforcement learning.
Neural Networks, Probabilistic Methods, Theory.  Neural Networks: The paper mentions that the individual fits may be from something more complex like a neural network.  Probabilistic Methods: The paper discusses combination methods based on the bootstrap and analytic methods.  Theory: The paper develops a general framework for the problem of combining regression fit vectors and examines a recent cross-validation-based proposal called "stacking" in this context. The paper also applies these ideas to classification problems where the estimated combination weights can yield insight into the structure of the problem.
Probabilistic Methods, Rule Learning  Explanation:   Probabilistic Methods: The paper compares the performance of different machine learning algorithms, including the semi naive Bayesian classifier, which is a probabilistic method. The authors also analyze the combination of decisions of several classifiers, which involves probabilistic reasoning.  Rule Learning: The paper discusses the performance and explanation abilities of different machine learning algorithms in predicting the femoral neck fracture recovery. The authors mention that the semi naive Bayesian classifier and Assistant-R seem to be the most appropriate algorithms. These algorithms are based on rule learning, which involves learning decision rules from data. The authors also analyze the combination of decisions of several classifiers, which can be seen as a form of rule learning.
Neural Networks.   Explanation: The paper suggests the use of fully recurrent neural networks with Fourier-type activation functions to fit sequential input/output data. The main theoretical advantage mentioned is related to the solvability of recovering internal coefficients from input/output data in closed form. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Genetic Algorithms, Theory.   Genetic Algorithms is the primary sub-category of AI that this paper belongs to, as it discusses the use of Island Model Genetic Algorithms and their performance in tracking multiple search trajectories. The paper also delves into the theoretical aspects of how Island Models can preserve genetic diversity.   Theory is another sub-category that applies to this paper, as it explores the underlying principles and mechanisms of Island Model Genetic Algorithms and their potential advantages in processing linearly separable problems.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of noisy bootstrap as a smoothness and capacity control technique for training feed-forward networks. It also mentions the use of weight decay regularization and ensemble averaging, which are commonly used techniques in neural network training.  Probabilistic Methods: The paper discusses the use of noisy bootstrap as a regularization technique for statistical methods such as generalized additive models. The paper also demonstrates the effectiveness of the combination of noisy bootstrap and ensemble averaging on the Cleveland Heart Data, which is a probabilistic modeling problem.
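The noisy bootstrap itself is a small procedure: resample the training set with replacement, then jitter each resampled point with Gaussian noise, whose variance acts as the smoothness control. A minimal sketch (names are illustrative, not from the paper):

```python
import random

def noisy_bootstrap(data, n_samples, noise_std=0.1, seed=0):
    """Noisy bootstrap sketch: resample with replacement, then add
    Gaussian jitter to each resampled point. `noise_std` plays the role
    of the smoothness / capacity-control parameter.

    data: list of numeric feature vectors.
    """
    rng = random.Random(seed)
    out = []
    for _ in range(n_samples):
        x = rng.choice(data)                       # bootstrap resample
        out.append([xi + rng.gauss(0.0, noise_std) for xi in x])
    return out
```

Training each member of an ensemble on a different noisy-bootstrap sample, then averaging the members, combines the two techniques the paper evaluates together.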
Probabilistic Methods.   Explanation: The paper focuses on defining classes of prior distributions for parameters and latent variables in autoregressive time series models, which is a key aspect of probabilistic modeling. The paper also discusses posterior analysis and inference, which is a common theme in probabilistic methods.
Probabilistic Methods.   Explanation: The paper focuses on Bayesian inference, which is a probabilistic method for statistical inference. The authors use a novel class of priors on parameters of latent components to provide smoothness priors on autoregressive coefficients, which allows for formal inference on model order and incorporation of uncertainty about model order into summary inferences. The paper also discusses the use of Bayesian inference in analyzing the frequency composition of time series and in overcoming problems in spectral estimation with autoregressive models using more traditional model fitting methods.
Neural Networks, Genetic Algorithms.   Neural Networks: The paper presents a new method for training multilayer perceptron networks, which are a type of neural network. The method involves dynamically allocating nodes and layers as needed, and training individual nodes using a genetic algorithm.   Genetic Algorithms: The method described in the paper involves training individual nodes of the network using a genetic algorithm. The paper also mentions that simulation results show that the method performs favorably in comparison with other learning algorithms, which suggests that the genetic algorithm component is a key factor in its success.
Reinforcement Learning, Neural Networks.   Reinforcement Learning is the main focus of the paper, as the program uses the method of temporal difference learning to train its artificial neural network to play draughts. The paper discusses the relative contribution of various factors to the strength of the TDplayer produced by the system, such as board representation, search depth, training regime, architecture, and run time parameters.   Neural Networks are also present in the paper, as the program uses an artificial neural network trained by the method of temporal difference learning to learn how to play the game of draughts. The paper discusses the role of architecture in the strength of the TDplayer produced by the system.
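Temporal-difference training of a position evaluator amounts to moving each position's predicted value toward the next position's prediction, with the final position anchored to the game outcome. A minimal TD(0) sketch over one game trace (illustrative names, with a lookup table standing in for the neural network evaluator):

```python
def td0_update(values, trace, outcome, alpha=0.1):
    """TD(0) sketch: update state-value estimates along one game trace.

    values:  dict mapping position -> predicted value (the table stands in
             for the neural network evaluator used in TD players).
    trace:   sequence of positions visited during one game.
    outcome: final reward (e.g. 1.0 for a win, 0.0 for a loss).
    """
    for i, pos in enumerate(trace):
        # Target is the next position's current prediction; for the final
        # position it is the actual game outcome.
        target = outcome if i == len(trace) - 1 else values.get(trace[i + 1], 0.0)
        v = values.get(pos, 0.0)
        values[pos] = v + alpha * (target - v)
    return values
```

Replayed over many games, credit for the final outcome propagates backward: late positions converge first, earlier ones follow.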
Case Based, Rule Learning.   Case Based AI is present in the paper through the application of similarity-based case retrieval to the KOSIMO database of international conflicts. Rule Learning AI is present in the paper through the analysis of the CONFMAN database of successful and unsuccessful conflict management attempts with an inductive decision tree learning algorithm.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses methods for selecting relevant features and examples in machine learning, which often involve probabilistic models and techniques such as Bayesian inference. For example, the authors mention "probabilistic relevance models" and "Bayesian feature selection" as approaches to feature selection.  Theory: The paper also discusses theoretical advances in machine learning related to feature and example selection. The authors describe a general framework for comparing different methods and mention "theoretical work in machine learning" as a source of progress on these topics. Additionally, the paper closes with a discussion of challenges for future work in this area, which includes theoretical questions such as "how to design algorithms that are robust to noise and outliers."
Probabilistic Methods.   The paper describes and illustrates Bayesian approaches to modelling and analysis of multiple non-stationary time series. It focuses on univariate models for collections of related time series assumed to be driven by underlying but unobservable processes, referred to as dynamic latent factor processes. The models use time-varying autoregressions capable of flexibly representing ranges of observed non-stationary characteristics. The paper also highlights concepts and new methods of time series decomposition to infer characteristics of latent components in time series, which is a probabilistic method. The paper discusses current and future research directions in this area.
Theory.   Explanation: The paper discusses algorithms and strategies for solving the θ-subsumption problem in ILP learning systems, which is a theoretical problem related to logic and inference. The paper does not discuss any specific application or implementation of AI, such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Case Based, Theory.   Explanation: Case-based planning is the main topic of the paper, which falls under the sub-category of Case Based AI. The paper also discusses the advantages of partial-order planners over state-space planners in the context of case-based planning, which involves theoretical analysis and comparison; it can therefore also be considered as belonging to the sub-category of Theory in AI.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper discusses an algorithm that searches for features defined by substructures, where each clause represents a binary feature. The algorithm conducts a top-down search for first-order clauses, which is a common approach in rule learning.   Probabilistic Methods are present in the text as the algorithm described in the paper is stochastic, meaning its search involves randomness and probability; this stochastic search is itself the probabilistic element of the work, and the paper reports favorable preliminary experiments with it.
Neural Networks.   Explanation: The paper presents a neural network model for predicting turning points in the gold bullion market based on historical data. The model is a simple recurrent neural network that was trained on daily closing prices of ten market indices over a period of five years. The paper demonstrates that the model has significant predictive power and can be used to time transactions in the gold bullion and gold mining company stock index markets to obtain a significant paper profit. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Probabilistic Methods.   Explanation: The paper presents a Bayesian approach to the discovery of causal models, which is a probabilistic method. The Minimum Message Length (MML) method used in the approach is also a probabilistic method. The paper does not mention any other sub-categories of AI.
Reinforcement Learning.   Explanation: The paper is specifically about a new algorithm for associative reinforcement learning, and discusses the performance of this algorithm in comparison to other reinforcement learning rules. While neural networks are mentioned as a type of architecture that can be used with the algorithm, the focus of the paper is on the reinforcement learning aspect.
Case Based, Theory  Explanation:  This paper belongs to the sub-category of Case Based AI because it focuses on the k-nearest neighbor (k-NN) classifier, which is a lazy learning algorithm that stores instances and uses them to generate predictions. The paper also discusses weight-setting methods for k-NN, which is a common technique in case-based reasoning to adjust the relevance of features in similarity calculations.  The paper also belongs to the sub-category of Theory because it introduces a framework for categorizing and comparing weight-setting methods for lazy learning algorithms. The paper discusses the advantages and disadvantages of different methods based on empirical evaluations, which contribute to the theoretical understanding of lazy learning algorithms.
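The feature-weighted k-NN prediction that such weight-setting methods modify can be sketched as follows (an illustrative example only; the weights here are assumed to be supplied, not learned by any of the surveyed methods, and all names are hypothetical):

```python
import math
from collections import Counter

def weighted_knn_predict(train, query, weights, k=3):
    """Classify `query` by majority vote among its k nearest neighbours.

    train:   list of (feature_vector, label) pairs.
    weights: per-feature weights scaling each feature's contribution
             to the Euclidean distance (the knob weight-setting
             methods tune).
    """
    def dist(x):
        return math.sqrt(sum(w * (a - b) ** 2
                             for w, a, b in zip(weights, x, query)))
    neighbors = sorted(train, key=lambda item: dist(item[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

Setting a feature's weight to zero removes it from the similarity computation entirely, which is why feature weighting subsumes feature selection in such frameworks.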
Case Based, Rule Learning.   Explanation: Case Based: This paper presents a novel approach to learning concept descriptions from examples, which is applicable when only a few examples are classified as positive (and negative) instances of a concept. This is similar to the idea of case-based reasoning, where a system learns from a small set of cases to make decisions or solve problems. Rule Learning: The approach presented in the paper tries to take advantage of the information which can be induced from descriptions of unclassified objects using a conceptual clustering algorithm. This can be seen as a form of rule learning, where the system learns rules or patterns from data to make predictions or classifications. The system Cola, which is described in the paper, also uses a rule-based approach to generate characteristic concept descriptions.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses Local Selection, which is a selection scheme used in evolutionary algorithms, a subfield of Genetic Algorithms. The paper compares Local Selection with other selection schemes and discusses its advantages and disadvantages.  Theory: The paper presents a theoretical analysis of Local Selection and its performance on different problem classes. It discusses the selection pressure applied by Local Selection and its impact on maintaining diversity in the population. The paper also discusses the efficiency of Local Selection and its suitability for parallel implementations.
Probabilistic Methods.   Explanation: The paper discusses the development and applicability of a classification algorithm based on calibrated radar signatures measured from ERS-1 and JERS-1 SAR image data. The algorithm is designed to be stable in terms of applicability in different geographical regions, and the paper compares its applicability in two different test sites. The use of calibrated radar signatures suggests a probabilistic approach to classification, where the algorithm assigns probabilities to different classes based on the measured radar signatures. There is no mention of other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper investigates the common processes of data averaging and data snooping in the context of neural networks, one of the most popular AI machine learning models. The paper also discusses the distribution of performance for neural networks in common problems.  Probabilistic Methods: The paper discusses the assumption of Gaussian distribution in data averaging and how it can significantly affect the interpretation of results, especially those of comparison studies. The paper proposes new guidelines for reporting performance which provide more information about the actual distribution (e.g. box-whiskers plots). The paper also emphasizes the importance of appropriate statistical tests and ensuring that any assumptions made in the tests are valid (e.g. normality of the distribution), which are key aspects of probabilistic methods.
Genetic Algorithms.   Explanation: The paper discusses the use of evolutionary algorithms (EAs) in studying non-coding DNA, specifically introns. Genetic algorithms are a type of evolutionary algorithm that use principles of natural selection and genetics to optimize solutions to problems. The paper provides a biological background on non-coding DNA and introns to better understand and conduct research using EAs. Therefore, genetic algorithms are the most related sub-category of AI to this paper.
Reinforcement Learning, Probabilistic Methods, Neural Networks.   Reinforcement Learning is the main focus of the paper, as the authors use it to study multiagent learning in simulated soccer. They compare two reinforcement learning algorithms: TD-Q learning with linear neural networks and Probabilistic Incremental Program Evolution (PIPE).   Probabilistic Methods are also used in the PIPE algorithm, which uses adaptive "probabilistic prototype trees" to synthesize programs that calculate action probabilities from current inputs.   Neural Networks are used in the TD-Q algorithm, which uses linear neural networks as evaluation functions (EFs) to map input/action pairs to expected reward.
Theory.   Explanation: The paper focuses on a theoretical problem of characterizing possible supply functions for a given dissipative nonlinear system, and provides a result that allows some freedom in the modification of such functions. The paper does not involve any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Neural Networks.   Explanation: The paper presents a novel supervised learning method that combines linear discriminant functions with neural networks. The proposed method results in a tree-structured hybrid architecture that uses component neural networks at the leaves of the tree to deal with subtasks. The growing and credit-assignment algorithms developed for the hybrid architecture provide an efficient way to apply existing neural networks for solving a large-scale problem. The paper evaluates the performance of the proposed method on several benchmark classification problems and compares it with the multi-layered perceptron. Therefore, the paper belongs to the sub-category of Neural Networks in AI.
Rule Learning.   The paper describes research aimed at applying machine learning techniques to current knowledge engineering representations, specifically redesigning a part of a knowledge-based system called control knowledge. The authors claim a strong similarity between the redesign of knowledge-based systems and incremental machine learning. While other sub-categories of AI may also be relevant to this research, Rule Learning is the most directly related.
Case Based, Rule Learning  Explanation:  This paper belongs to the sub-category of Case Based AI because it discusses the use of lazy learners, which are a type of machine learning algorithm that relies on previously stored cases to make predictions. The paper also discusses the use of context-sensitive feature selection, which involves selecting relevant features based on the specific context of the problem being solved. This is a key aspect of case-based reasoning, as it involves selecting the most relevant cases to use as a basis for making predictions.  The paper also belongs to the sub-category of Rule Learning AI because it discusses the use of decision rules to guide the feature selection process. The authors propose a method for generating decision rules based on the context of the problem, which can then be used to select the most relevant features for a given task. This approach is similar to other rule-based machine learning algorithms, such as decision trees and rule induction, which use a set of rules to make predictions based on input data.
Neural Networks. This paper belongs to the Neural Networks sub-category of AI. The paper presents a new penalty term for neural networks that uses Principal Component Analysis to detect functional redundancy in the network. The paper also discusses how this new term can improve techniques that make use of a penalty term, such as weight decay, weight pruning, feature selection, Bayesian, and prediction-risk techniques, all of which are commonly used in neural network research.
Theory.   Explanation: The paper discusses the use of Walsh functions in predicting problem complexity, which is a theoretical approach to AI. The paper does not discuss any specific application or implementation of AI, such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Neural Networks, Genetic Algorithms.   Neural Networks: The paper introduces the DMP1 method, which is a type of neural network. It also compares the performance of different training methods for individual nodes in the network.  Genetic Algorithms: The paper discusses the use of a genetic algorithm for training individual nodes in the DMP1 network, and how it can enhance the convergence properties of the network.
Case Based, Theory.   Case-based reasoning is the main focus of the paper, as the authors describe the development and implementation of Kritik, a case-based design system. The paper also discusses the integration of case-based and model-based reasoning, which is a theoretical aspect of AI. The authors emphasize the importance of grounding the computational process of case-based reasoning in the SBF content theory of device comprehension, which is a theoretical framework for understanding how devices work.
Probabilistic Methods.   Explanation: The paper discusses learning Bayesian Networks, which is a probabilistic graphical model. The proposed algorithm uses Expectation-Maximization (EM) and Imputation techniques, which are probabilistic methods commonly used for handling missing data in Bayesian Networks. The title of the paper also includes the term "Bayesian Networks," which is a type of probabilistic model.
Probabilistic Methods, Reinforcement Learning  Probabilistic Methods: The paper discusses the use of Bayesian networks to represent uncertain knowledge in the domain of medical diagnosis and treatment planning. The authors describe how they use probabilistic inference to reason about the likelihood of different treatment options and their potential outcomes.  Reinforcement Learning: The paper also describes how the CHIRON system uses reinforcement learning to improve its performance over time. The system learns from feedback provided by medical experts and adjusts its decision-making process accordingly. The authors discuss how they use a combination of model-based and model-free reinforcement learning techniques to balance exploration and exploitation in the domain.
Reinforcement Learning, Theory.  Reinforcement learning is the primary sub-category of AI discussed in the paper. The authors propose using reinforcement learning techniques to enable agents to learn complementary policies without any knowledge about each other. They also experimentally verify the effects of learning rate on system convergence and demonstrate the benefits of using learned coordination knowledge on similar problems.  Theory is also a relevant sub-category as the paper discusses formal models of conflict and cooperation among agent interests and analyzes the effects of learning rate on system convergence. The authors also discuss the potential benefits of using reinforcement learning-based coordination in domains with noisy communication channels and other stochastic characteristics.
Neural Networks, Rule Learning, Probabilistic Methods.   Neural Networks: The paper compares the performance of the error backpropagation (BP) and ID3 learning algorithms on the task of mapping English text to phonemes and stresses. The distributed output code developed by Sejnowski and Rosenberg is used to show that BP consistently out-performs ID3 on this task.  Rule Learning: The paper explores three hypotheses explaining the difference in performance between BP and ID3, one of which is that ID3 is overfitting the training data. The paper also suggests augmenting ID3 with a simple statistical learning procedure to improve its performance.  Probabilistic Methods: The paper suggests that BP captures statistical information that ID3 does not, and that more complex statistical procedures can improve the performance of both BP and ID3 substantially. The study of residual errors also suggests that there is still substantial room for improvement in learning methods for text-to-speech mapping.
Probabilistic Methods.   Explanation: The paper discusses the use of probabilistic models to learn the mapping from meaning to sounds in natural language processing. Specifically, the authors use a probabilistic model called a Hidden Markov Model (HMM) to learn the mapping. There is no mention of any other sub-category of AI in the text.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses the concept of "steady state" genetic algorithms and the advantages and disadvantages of replacing only a fraction of the population each generation.   Theory: The paper reviews and extends theoretical and empirical results related to the issue of overlapping generations in genetic algorithms.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the ability to recognize and adapt to changes in context, which requires probabilistic reasoning to some extent. For example, the meta-learner in the proposed model identifies potential contextual clues based on probabilistic reasoning.  Rule Learning: The paper presents a two-level learning model, where the base level learner performs regular on-line learning and classification, while the meta-learner identifies potential contextual clues. This meta-learner can be seen as a rule learner, as it learns to recognize certain patterns or attributes that indicate a change in context.
Case Based, Theory.   Explanation: Case Based: The paper takes a case-based reasoning perspective and explores memory issues that influence long-term creative problem solving and design activity. The authors abstract Bell's reasoning and understanding mechanisms that appear time and again in long-term creative design, and identify that the understanding mechanism is responsible for analogical anticipation of design constraints and analogical evaluation, besides case-based design. The new mechanisms are integrated in a computational model, ALEC, that accounts for some creative behavior. Theory: The paper discusses the mechanisms of reasoning and understanding in creative design, and proposes a computational model that accounts for some creative behavior. The authors also draw on well-documented examples, such as the invention of the telephone by Alexander Graham Bell, to support their arguments.
Reinforcement Learning, Probabilistic Methods.   Reinforcement learning is the main focus of the paper, as it investigates the performance of cooperative agents compared to independent agents in a reinforcement learning setting. The paper also mentions the use of probabilistic methods in the context of sharing learned policies or episodes among agents, which can speed up learning at the cost of communication.
Genetic Algorithms.   Explanation: The paper explicitly mentions "Genetic Algorithms" in the title and in the abstract. The authors also provide their email addresses for correspondence related to the paper's submission to the 5th International Conference on Genetic Algorithms. The paper discusses the use of genetic algorithms for solving a specific problem related to directed acyclic graphs (DAGs). There is no mention of any other sub-category of AI in the text.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper describes the use of SFOIL, a descendant of FOIL, which is a top-down ILP system that uses the covering approach and advanced search strategies to learn rules from data.   Probabilistic Methods are also present in the text as the paper mentions the use of the advanced stochastic search heuristic in SFOIL, which is a probabilistic method for searching the space of possible rules.
Neural Networks, Theory.   Neural Networks: The paper describes a neural network architecture that models visual relative motion perception. The network uses a competitive neural circuit to bind visual elements together into a representation of a visual object, and information about the spiking pattern of neurons allows transfer of the bindings of an object representation from location to location in the neural circuit as the object moves.  Theory: The paper presents a theory of visual relative motion perception that explains how neural circuits can group moving visual elements relative to one another, based upon hierarchical reference frames. The theory is based on Gestalt common-fate principles and exploits information about the behavior of each group to predict the behavior of individual elements. The model exhibits characteristics of human object grouping and solves some key neural circuit design problems in visual relative motion perception.
Rule Learning, Theory.   Rule Learning is present in the paper as the authors examine the problem-solving behavior of existing redesign systems and approaches to come up with a collection of problem-solving methods for redesign. They also distinguish a number of dimensions along which redesign problem-solving methods can vary.   Theory is present in the paper as the authors present a knowledge-level analysis of redesign, viewing it as a family of methods based on common principles. They also propose extending the current notion of possible relations between tasks and methods in a PSM architecture to include the notions of task refinement and method refinement, which represent intermediate decisions in a task-method structure.
Probabilistic Methods.   Explanation: The paper discusses Bayesian density estimation and prediction using Dirichlet process mixtures of standard, exponential family distributions. The focus is on the precision or total mass parameter of the mixing Dirichlet process, which is a critical hyperparameter that strongly influences resulting inferences about numbers of mixture components. The paper proposes a flexible class of prior distributions for this parameter and shows how the posterior may be represented in a simple conditional form that is easily simulated. The paper also discusses the use of data augmentation and provides an asymptotic approximation to the posterior. All of these are characteristic of probabilistic methods in AI.
Neural Networks, Reinforcement Learning  This paper belongs to the sub-category of Neural Networks as it discusses the parallel training of Simple Recurrent Neural Networks (SRNNs). The paper explores different strategies for parallelizing the training process of SRNNs, which are a type of neural network that can process sequential data.   Additionally, the paper also touches upon the sub-category of Reinforcement Learning as it discusses the use of a reinforcement learning algorithm called Q-learning to optimize the training process of SRNNs. The authors propose a Q-learning-based approach to dynamically adjust the learning rate of the SRNNs during training, which can improve their convergence speed and accuracy.
Neural Networks.   Explanation: The paper focuses on unsupervised neural network learning procedures for feature extraction and classification. It discusses various neural network architectures and learning algorithms, such as self-organizing maps, adaptive resonance theory, and backpropagation. The paper does not discuss any other sub-categories of AI such as case-based reasoning, genetic algorithms, probabilistic methods, reinforcement learning, rule learning, or theory.
Neural Networks.   The paper describes the use of neural networks to diagnose faults in local telephone loops and compares their performance to that of an expert system called MAX, which aids human experts in this diagnostic task. The paper also discusses neural network ensembles and knowledge-based neural networks, which incorporate expert knowledge into the network architecture; both are reported to perform better than standard neural networks.
Probabilistic Methods, Rule Learning  The paper belongs to the sub-category of Probabilistic Methods because it deals with one-sided random misclassification noise, which is a probabilistic phenomenon. The authors use a probabilistic model to estimate the probability of misclassification and incorporate it into their learning algorithm.  The paper also belongs to the sub-category of Rule Learning because the authors propose a rule-based approach to learning one-dimensional geometric patterns. They use a set of rules to generate candidate patterns and evaluate them based on their ability to fit the observed data. The authors also use a rule-based approach to handle misclassification noise, by modifying the rules to account for the possibility of misclassification.
Neural Networks.   Explanation: The paper describes a self-organizing neural network model that explains how developmental exposure to moving stimuli can direct the formation of horizontal trajectory-specific motion integration pathways. The model accounts for Burr's data and potentially other phenomena, such as visual inertia. Therefore, the paper belongs to the sub-category of AI that deals with neural networks.
Probabilistic Methods.   The paper provides a qualitative probabilistic analysis of intercausal reasoning and introduces the concept of product synergy to determine which form of reasoning is appropriate. The paper also extends the qualitative probabilistic network (QPN) formalism to support qualitative intercausal inference about the directions of change in probabilistic belief. Therefore, the paper is primarily focused on probabilistic methods in AI.
Theory.   Explanation: The paper presents theoretical results and techniques for learning in a specific model, without focusing on any specific application or implementation of AI. The paper does not discuss any specific algorithms or methods such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Genetic Algorithms, Neural Networks, Reinforcement Learning.   Genetic Algorithms: The paper describes the use of an evolutionary procedure to develop a set of behaviors for the robot without human intervention. This is achieved through the use of genetic algorithms, which are a type of evolutionary algorithm.  Neural Networks: The paper describes the use of a discrete-time recurrent neural network to control the robot. The emergent homing behavior is based on the autonomous development of an internal neural topographic map.  Reinforcement Learning: The paper describes the autonomous development of a set of behaviors for locating a battery charger and periodically returning to it. This is achieved through the use of reinforcement learning, which is a type of machine learning that involves training an agent to make decisions based on rewards and punishments.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: Genetic programming, a special form of genetic algorithm, is the method used in the paper; genetic algorithms are a sub-category of AI that involves the use of evolutionary algorithms to solve problems. Neural Networks: The paper presents a new approach to constructing neural networks using genetic programming, which involves evolving the architecture and weights simultaneously without local weight optimization. Therefore, the paper belongs to the sub-category of Neural Networks as well.
Reinforcement Learning, Probabilistic Methods  This paper belongs to the sub-category of Reinforcement Learning as it discusses the problem of "greedy exploration" in RL algorithms. The paper proposes a new approach to address this problem by using probabilistic methods to estimate the value of different actions. The authors argue that this approach can lead to more efficient exploration and better decision-making in RL tasks. Therefore, the paper also belongs to the sub-category of Probabilistic Methods.
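The greedy-exploration problem this entry refers to is commonly countered with epsilon-greedy action selection, sketched below (a generic, widely used remedy, not the paper's own estimation-based approach; the names are hypothetical):

```python
import random

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """With probability epsilon take a random action, otherwise the
    greedy one (highest estimated value).

    q_values: dict mapping actions to current value estimates.
    """
    actions = list(q_values)
    if rng.random() < epsilon:
        return rng.choice(actions)  # explore
    return max(actions, key=lambda a: q_values[a])  # exploit
```

A purely greedy agent (epsilon = 0) may lock onto an action whose value estimate is wrong and never gather the data needed to correct it, which is precisely the failure mode such papers target.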
Case Based.   The paper belongs to the sub-category of Case Based AI as it discusses the use of past experiences (cases) to facilitate analogical reasoning. It also utilizes conceptual clustering, a method of organizing cases into case classes based on their similarities, and focuses on analogical reasoning, which involves using past experiences to solve new problems; both are techniques employed within the case-based approach rather than separate sub-categories.
Probabilistic Methods. This paper belongs to the sub-category of probabilistic methods because it discusses the non-parametric estimation of probability density functions using weight functions. The paper proposes a new method that requires almost linear time and derives conditions for convergence under different metrics. The paper also compares the efficiency and accuracy of the proposed method with kernel-based estimators.
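For contrast with the paper's weight-function approach, the kernel-based estimators it is compared against can be sketched in one dimension (a standard Gaussian kernel density estimator, not the paper's method; names are illustrative):

```python
import math

def kde(samples, x, bandwidth=1.0):
    """Estimate the density at x as the average of Gaussian kernels
    centred on each sample point."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                      for s in samples)
```

Evaluating this naively at m query points over n samples costs O(mn), which is one motivation for faster, near-linear-time alternatives of the kind the paper proposes.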
Neural Networks  Explanation: The paper proposes an active learning method specifically for multilayer perceptrons (MLP), which are a type of neural network. The paper discusses the singularity condition of an information matrix, which is a concept specific to neural networks. Therefore, this paper belongs to the sub-category of AI known as Neural Networks.
Neural Networks. This paper belongs to the sub-category of AI known as Neural Networks. The paper discusses the use of neural networks for adaptive control and estimation of nonlinear systems using Gaussian radial basis functions. It also presents an algorithm for stable, on-line adaptation of output weights simultaneously with node configuration in a class of non-parametric models with wavelet basis functions. The paper focuses on the merging of concepts from nonlinear dynamic systems theory with tools from multivariate approximation theory to develop more efficient system representations while preserving global closed-loop stability.
Probabilistic Methods.   Explanation: The paper discusses the impact of bias on machine learning algorithms and proposes a method for quantifying stability based on a measure of agreement between concepts. This approach is based on the assumption of underlying probability distributions and is therefore related to probabilistic methods in AI. The other sub-categories listed (Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, Theory) are not directly mentioned or applicable to the content of the paper.
Genetic Algorithms. This paper belongs to the Genetic Algorithms sub-category of AI. The text discusses the use of Genetic Algorithms to find a (near-)optimal solution using a limited amount of computation. It proposes simultaneous tuning of the selective pressure and the disruptiveness of the recombination operators to find a good balance between exploration and exploitation. The experiments conducted in the paper also show the effectiveness of this approach.
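One standard way to tune the selective pressure discussed in this entry is tournament selection, where larger tournaments apply stronger pressure (a generic sketch, not the paper's specific operators; all names are hypothetical):

```python
import random

def tournament_select(population, fitness, size=2, rng=random):
    """Draw `size` individuals at random (with replacement) and return
    the fittest; increasing `size` increases selective pressure."""
    contenders = [rng.choice(population) for _ in range(size)]
    return max(contenders, key=fitness)
```

Balancing this pressure against the disruptiveness of recombination is exactly the exploration/exploitation trade-off the paper tunes.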
Theory  Explanation: The paper discusses the use of cross-validation as a technique for estimating the accuracy of theories learned by machine learning algorithms. It does not focus on any specific sub-category of AI such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning. Instead, it presents a theoretical analysis of the phenomenon observed during cross-validation and offers explanations for it. Therefore, the paper belongs to the sub-category of AI known as Theory.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper discusses the use of Multilayer Perceptrons (MLP), Radial Basis Function Networks (RBFNs), and Fuzzy Controllers as function approximators for (non-)linear controllers. It also describes the synthesis of network layout from a set of examples using symbolic and statistical learning algorithms.  Reinforcement Learning: The paper mentions the "peg-into-hole" task as a test case for learning controllers for a KUKA IR-361 robot. This task involves the use of reinforcement learning to train the robot to perform it efficiently.
Genetic Algorithms, Reinforcement Learning.   Genetic algorithms are used to evolve behaviors for robots, as stated in the abstract. The paper discusses how the learning is performed under simulation, and the resulting behaviors are then used to control the actual robot. This is an example of reinforcement learning, where the robot learns through trial and error to achieve a desired behavior.
Theory  Explanation: This paper belongs to the sub-category of AI called Theory. The paper explores how judgments of similarity and soundness can be modeled using SME, a simulation of Gentner's structure-mapping theory. The focus is on explicating several principles which psychologically plausible algorithms should follow, and introducing the Specificity Conjecture. The paper does not involve the use of Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning.
Genetic Algorithms.   Explanation: The paper focuses on the use of genetic algorithms to solve an optimization problem derived from the 3-Conjunctive Normal Form problem. The paper discusses the use of parallel genetic algorithms and hill-climbing techniques to improve the quality of solutions. The majority of the paper is dedicated to discussing the implementation and effectiveness of genetic algorithms in solving the problem. Therefore, the paper belongs to the sub-category of AI known as Genetic Algorithms.
Reinforcement Learning, Neural Networks  This paper belongs to the sub-categories of Reinforcement Learning and Neural Networks. Reinforcement Learning is present in the paper as the authors propose a TD learning algorithm to learn game evaluation functions. The algorithm uses a reward signal to update the weights of the neural network. Neural Networks are also present in the paper as the authors use a hierarchical neural architecture to learn the game evaluation functions. The architecture consists of multiple layers of neural networks, with each layer learning a different level of abstraction.
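The TD weight update described above can be sketched for a linear evaluation function; the hierarchical network in the paper is replaced here by a linear approximator, and the features, reward, and learning rate are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def td0_update(w, phi_t, phi_next, reward, alpha=0.1, gamma=1.0):
    """One temporal-difference update of linear evaluation weights w."""
    delta = reward + gamma * (w @ phi_next) - (w @ phi_t)  # TD error
    return w + alpha * delta * phi_t

# a terminal transition: the position with features phi yields a win (reward 1)
w = np.zeros(3)
phi = np.array([1.0, 0.0, 1.0])
terminal = np.zeros(3)                       # terminal positions have value 0
w = td0_update(w, phi, terminal, reward=1.0)
```

After this single update the weights move toward the rewarded position's features, which is exactly the credit-assignment step the reward signal drives.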
Neural Networks.   Explanation: The paper describes ICSIM, a simulator for structured connectionism, which is a type of neural network. The paper discusses the need for flexibility and efficiency in designing and reusing modular substructures, which are important considerations in neural network design. The paper also describes the use of object-oriented programming, which is a common approach to implementing neural networks.
Theory.   Explanation: The paper presents an average-case analysis of a simple algorithm for inducing one-level decision trees, and derives the expected classification accuracy over the entire instance space based on various domain parameters. The focus is on theoretical results and their impact on practice, rather than on specific AI sub-categories such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper compares the performance of several machine learning algorithms, including the semi-naive Bayesian classifier, which is a probabilistic method. The paper also discusses the combination of decisions of several classifiers, which is a probabilistic approach.  Rule Learning: The paper discusses the Assistant-I and Assistant-R algorithms for top-down induction of decision trees using information gain and RELIEFF as search heuristics, respectively. These algorithms are examples of rule learning. The paper also analyzes the explanation ability of different classifiers, which is a key aspect of rule learning.
Theory  Explanation: The paper describes modifications to the Structure-Mapping Engine (SME) algorithm to make it more efficient and relevant to an analogizer's goals. It does not use any specific sub-category of AI such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning. Instead, it focuses on theoretical analysis and modifications to an existing algorithm. Therefore, the paper belongs to the sub-category of AI called Theory.
Probabilistic Methods, Theory  The paper belongs to the sub-category of Probabilistic Methods because it discusses the use of Bayesian methods for estimating the variance and bias of the WaveShrink algorithm. The authors use a Bayesian framework to derive posterior distributions for the variance and bias parameters, which allows them to make probabilistic statements about the performance of the algorithm.  The paper also belongs to the sub-category of Theory because it presents a theoretical analysis of the WaveShrink algorithm. The authors derive upper bounds on the mean-squared error of the algorithm and use these bounds to guide the selection of the variance and bias parameters. They also provide a detailed discussion of the assumptions underlying the analysis and the implications of these assumptions for the practical use of the algorithm.
Theory.   Explanation: The paper focuses on the theoretical properties and analysis of the WaveShrink procedure and its new semisoft shrinkage scheme. It does not involve the implementation or application of any specific AI techniques such as neural networks or reinforcement learning.
This paper belongs to the sub-category of AI called Neural Networks. Neural networks are mentioned in the title of the paper and are the focus of the algorithm described in the paper. The paper discusses using active data collection to improve the accuracy of neural networks in feasibility studies.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses Relief and its extension ReliefF, which are statistical methods that estimate the quality of attributes in classification problems with strong dependencies between attributes. These methods exploit local information provided by different contexts to provide a global view and recognize contextual attributes. This approach involves probabilistic methods to estimate attribute quality.  Rule Learning: The paper discusses how Relief and its extension ReliefF are capable of recognizing contextual attributes. This involves learning rules that capture the dependencies between attributes and how they affect the classification problem. The paper also introduces Regressional ReliefF (RReliefF), which provides a unified view on estimating attribute quality and can be used for non-myopic learning of the regression trees. This approach involves rule learning to estimate attribute quality.
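The core Relief idea — reward attributes that differ on near instances of a different class and agree on near instances of the same class — can be sketched as below. The toy dataset and the L1 nearest-hit/nearest-miss search are illustrative simplifications, not the paper's full ReliefF procedure.

```python
import numpy as np

def relief_weights(X, y):
    """Estimate attribute quality by contrasting nearest hits and misses."""
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        x, label = X[i], y[i]
        hits = [j for j in range(n) if y[j] == label and j != i]
        misses = [j for j in range(n) if y[j] != label]
        hit = X[min(hits, key=lambda j: np.abs(X[j] - x).sum())]
        miss = X[min(misses, key=lambda j: np.abs(X[j] - x).sum())]
        # informative attributes differ on the miss but agree on the hit
        w += (np.abs(x - miss) - np.abs(x - hit)) / n
    return w

# attribute 0 determines the class; attribute 1 is pure noise
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 1, 1])
w = relief_weights(X, y)
```

On this toy data the informative attribute ends with a positive weight and the noise attribute with a negative one.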
Genetic Algorithms.   Explanation: The paper explicitly discusses Genetic Algorithms as the main topic and describes their basic working scheme as developed by Holland. The proposed extensions are also based on the second-level learning principle for strategy parameters as introduced in Evolution Strategies, which is related to Genetic Algorithms. The other sub-categories of AI are not mentioned or discussed in the paper.
Rule Learning, Theory.   Rule Learning is present in the text as the paper discusses the extension of Inductive Logic Programming (ILP) to Abductive Concept Learning (ACL), which is a rule-based learning framework.   Theory is also present in the text as the paper presents a theoretical framework for integrating abduction and induction into a common learning framework through the notion of ACL. The paper discusses the main characteristics of ACL and illustrates its potential in addressing several problems in ILP. The paper also develops an algorithm for ACL and integrates it with an abductive proof procedure for Abductive Logic Programming (ALP). The paper investigates the particular role of integrity constraints in ACL and shows how ACL is a hybrid learning framework that integrates the explanatory and descriptive settings of ILP.
Reinforcement Learning, Probabilistic Methods.   Reinforcement learning is the main focus of the paper, as it discusses the Markov decision process (MDP) formalization of reinforcement learning and proposes a Q-learning-like algorithm for finding optimal policies in a multi-agent setting.   Probabilistic methods are also present, as the environment in the Markov games framework is defined by a probabilistic transition function, and the optimal policy in the simple two-player game described in the paper is probabilistic.
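The single-agent Q-learning update that the paper's Markov-games algorithm generalizes can be sketched on a toy two-state chain; the MDP, uniform exploration, and hyperparameters here are illustrative assumptions, not the multi-agent setting of the paper.

```python
import numpy as np

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.5, gamma=0.9, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = 0
        for _ in range(10):
            a = int(rng.integers(n_actions))   # explore uniformly
            s2, r = step(s, a)
            # Q-learning update: bootstrap from the greedy successor value
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q

# chain MDP: action 1 in state 0 moves to state 1; action 1 in state 1 pays 1
def step(s, a):
    if s == 0:
        return (1, 0.0) if a == 1 else (0, 0.0)
    return (0, 1.0) if a == 1 else (1, 0.0)

Q = q_learning(2, 2, step)
```

The learned Q-values prefer action 1 in both states, i.e. the policy that keeps collecting the reward.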
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper discusses the use of Genetic Programming, which is a type of Genetic Algorithm, for automatically generating functions and algorithms through natural selection.   Reinforcement Learning: The paper also discusses how the softbots have learned on their own how to play a reasonable game of soccer, which is a characteristic of Reinforcement Learning.
Genetic Algorithms, Neural Networks, Reinforcement Learning.   Genetic Algorithms: The paper discusses the use of evolution as a means to program control for the robots. This involves the use of genetic algorithms to evolve the neural networks that control the robots' behavior.  Neural Networks: The paper specifically mentions the use of artificial neural networks to control the wandering behavior of the robots. The evolved neural networks are used to determine the robots' movements and actions.  Reinforcement Learning: The task given to the robots is to touch as many squares in a grid as possible during a fixed period of time. This is a form of reinforcement learning: the robots receive feedback as a score based on their performance, and the neural networks are evolved to maximize that score.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper utilizes Genetic Programming to evolve behavioral strategies for the predator agents.   Reinforcement Learning: The paper discusses the expected competitive learning cycle between the predator and prey populations, which are allowed to evolve simultaneously; this coevolutionary feedback can be seen as a form of reinforcement learning.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms are discussed extensively in the paper as one of the main representatives of algorithms based on the model of natural evolution. The paper explains their basic working mechanisms, differences from Evolution Strategies, and application possibilities.   Probabilistic Methods are also mentioned in the paper, particularly in the context of Evolution Strategies. The paper emphasizes the mechanism of self-adaptation of strategy parameters within Evolution Strategies, which is a probabilistic method that allows for an on-line adaptation of strategy parameters without exogenous control.
Probabilistic Methods, Rule Learning  Explanation:   Probabilistic Methods: The paper discusses the use of the minimum expected cost criterion to select the prediction class, which is a probabilistic approach to cost-sensitive classification.  Rule Learning: The paper explores boosting techniques for decision tree classification, which is a form of rule learning.
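The minimum expected cost criterion mentioned above can be sketched directly: pick the class whose expected misclassification cost, under the classifier's posterior, is lowest. The cost matrix and posterior below are made-up illustrations, not values from the paper.

```python
import numpy as np

def min_expected_cost_class(posterior, cost):
    """cost[i, j] = cost of predicting class j when the true class is i."""
    expected = posterior @ cost          # expected cost of each prediction
    return int(np.argmin(expected))

# false negatives (true 1, predicted 0) are ten times worse than false positives
cost = np.array([[0.0, 1.0],
                 [10.0, 0.0]])
# even with P(class 1) = 0.2, predicting 1 is cheaper: 0.8 * 1 < 0.2 * 10
pred = min_expected_cost_class(np.array([0.8, 0.2]), cost)
```

This is why cost-sensitive classification can rationally predict a minority class: the asymmetric costs outweigh the posterior odds.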
Reinforcement Learning.   Explanation: The paper is specifically about a novel method for multi-agent reinforcement learning, and the abstract mentions that traditional reinforcement learning algorithms are not well-suited for this task. The paper goes on to describe the "incremental self-improvement" method, which is a reinforcement learning algorithm that allows each animat to improve its own policy over time. While other sub-categories of AI may be involved in the implementation of this method, such as neural networks or probabilistic methods, the focus of the paper is on reinforcement learning.
Genetic Algorithms.   Explanation: The paper discusses the breeder genetic algorithm (BGA) and proposes a modification to it using competing subpopulations. The paper presents numerical results for a number of test functions, which are commonly used in evaluating the performance of genetic algorithms. The use of genetic operators and control parameters is also mentioned, which are key components of genetic algorithms. Therefore, this paper belongs to the sub-category of Genetic Algorithms in AI.
Rule Learning, Theory.   Explanation: The paper presents a computational approach to the acquisition and application of problem schemes, which relies on the concept of recursive program schemes. This approach can be seen as a form of rule learning, where the system learns to apply certain rules or procedures to solve problems. Additionally, the paper proposes a theoretical framework to describe human problem solving and learning in a formal way, which falls under the category of theory in AI. The other sub-categories of AI (Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning) are not directly relevant to the content of the paper.
Genetic Algorithms, Theory.   Genetic Algorithms is the primary sub-category of AI that this paper belongs to, as it focuses on the use and performance of GAs in artificial-life systems. The paper proposes a strategy for understanding the types of fitness landscapes that lead to successful GA performance, and presents experimental results on the role of crossover and building blocks on these landscapes.   Theory is also a relevant sub-category, as the paper aims to provide a theoretical basis for characterizing fitness landscapes and understanding GA performance. The authors propose a set of features of fitness landscapes that are relevant to the GA, and experimentally study how different configurations of these features affect GA performance.
Neural Networks, Optimal Experiment Design, Theory.   Neural Networks: The paper focuses on the query/action selection of a neural network learner.   Optimal Experiment Design: The paper applies techniques from Optimal Experiment Design (OED) to guide the query/action selection of a neural network learner.   Theory: The paper builds on the theoretical results of Fedorov [1972] and MacKay [1992] and concludes that OED-based query/action selection has much to offer.
This paper belongs to the sub-category of AI called Case Based.   Explanation:  The title of the paper, "CBET: a Case Base Exploration Tool," suggests that the focus of the paper is on case-based reasoning. The abstract also mentions that the tool is designed to "support the exploration of case bases," further emphasizing the case-based approach. The paper describes how the CBET tool uses case-based reasoning to help users explore and analyze large case bases. The other sub-categories of AI (Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, Rule Learning, Theory) are not mentioned in the text and are therefore not applicable.
Case Based, Theory.   Case-based AI is relevant because the paper proposes a hybrid architecture combining case-based and model-based diagnostic problem solving. The paper also presents a theoretical complexity analysis, which falls under the category of theory in AI.
Neural Networks (Self-Organizing Feature Map).   Explanation: The paper discusses the use of a self-organizing feature map, which is a type of neural network, to grow a hypercubical output space. The authors describe how the network is trained to organize input data into a high-dimensional output space, and how this space can be expanded to accommodate new data. The paper does not discuss any other sub-categories of AI.
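The standard (fixed-size) self-organizing feature map update can be sketched in one dimension as below; the data, rates, and decay schedule are illustrative assumptions, and the paper's actual contribution — growing the output space — is omitted from this sketch.

```python
import numpy as np

def train_som_1d(data, n_units=10, n_iters=2000, seed=0):
    """Train a 1-D SOM: pull the best-matching unit and its neighbors toward x."""
    rng = np.random.default_rng(seed)
    w = rng.random(n_units)                          # unit weights in [0, 1]
    idx = np.arange(n_units)
    for t in range(n_iters):
        x = data[rng.integers(len(data))]
        lr = 0.5 * (0.02 / 0.5) ** (t / n_iters)     # decaying learning rate
        sigma = 3.0 * (0.5 / 3.0) ** (t / n_iters)   # shrinking neighborhood
        bmu = np.argmin(np.abs(w - x))               # best-matching unit
        h = np.exp(-((idx - bmu) ** 2) / (2 * sigma**2))
        w += lr * h * (x - w)                        # move neighborhood toward x
    return w

data = np.random.default_rng(1).random(500)          # inputs ~ Uniform(0, 1)
w = train_som_1d(data)
```

Because every update is a convex combination of a weight and a data point, the trained weights stay inside the data range while spreading out to cover it.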
Neural Networks.   Explanation: The paper discusses the representation of hidden variable models using attractor neural networks and explores the use of these networks for pattern analysis and synthesis. The entire paper is focused on the use of neural networks for this purpose, and there is no mention of any other sub-category of AI.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses decision graphs as an extension of decision trees, which are a common probabilistic method used in AI. Decision graphs are also described as a way to represent probability distributions over decision variables.  Rule Learning: Decision trees are a type of rule learning algorithm, and decision graphs can be seen as an extension of this approach. The paper discusses how decision graphs can be used to represent complex decision rules, and how they can be learned from data using techniques such as maximum likelihood estimation.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as the method of "reinforcement driven information acquisition" is developed and tested. The paper also draws on concepts from information theory to implement this method, which falls under the category of Theory.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper mentions that the LBG-U method is inspired by neural networks, specifically the Kohonen self-organizing map. The authors explain how they modified the LBG algorithm to incorporate the self-organizing map concept, resulting in the LBG-U method.   Probabilistic Methods: The paper discusses the use of probability distributions in the LBG-U method, specifically the Gaussian mixture model. The authors explain how the LBG-U method uses the Gaussian mixture model to model the probability distribution of the input data, which is then used to generate the codebook.
Theory.   Explanation: The paper focuses on the theoretical problem of learnability with membership queries in the presence of incomplete information, and proposes a learning algorithm using split graphs and hypergraphs. There is no mention or application of any of the other sub-categories of AI listed.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper describes a model for the average computing time of a KADS knowledge-based system based on its structure. This model is based on probabilistic methods, as it takes into account the probability of each rule being executed and the probability of each inference path being followed.  Rule Learning: The paper discusses the use of a cost-model in designing a knowledge-based system. This cost-model is based on the structure of the system, which includes the rules used in the system. Therefore, the paper is related to the sub-category of Rule Learning.
Case Based, Reinforcement Learning  Explanation:  - Case Based: The paper discusses planning by retrieving and adapting past planning cases, which is a key characteristic of case-based reasoning. The Prodigy/Analogy system mentioned in the paper combines generative and case-based planning.  - Reinforcement Learning: Although not explicitly mentioned, the paper discusses the potential for joint cooperation between human and machine planners to achieve better plans than either could create alone. This is a form of reinforcement learning, where the machine learns from the human's input and adjusts its planning accordingly.
Rule Learning, Theory.   The paper discusses the issue of consistency in concept learning, which falls under the category of rule learning. The authors propose a novel approach that directly addresses consistency, which is a theoretical contribution to the field.
Neural Networks.   Explanation: The paper proposes a solution for blind separation of sources using multi-layer neural networks with adaptive learning algorithms. The entire paper is focused on the development and application of neural networks for this problem, making it the most related sub-category of AI.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper describes a learning algorithm for a network that performs online stochastic gradient ascent. The network is calibrated to the higher-order moments of the input density functions, and it factorises the input into independent components.   Probabilistic Methods: The algorithm is derived from the mutual information objective, which involves maximising the mutual information between outputs and inputs of the network. The paper also mentions the minimisation of mutual information between outputs, as well as maximising their individual entropies. The example application of blind separation of speech signals also involves probabilistic modelling of the input signals.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms are present in the text as the paper discusses experiments on adding memory to XCS, which is a type of classifier system that uses genetic algorithms to evolve rules. The paper explores the effects of adding memory to XCS and how it affects the performance of the system.  Reinforcement Learning is also present in the text as XCS is a type of reinforcement learning algorithm that uses a reward signal to learn optimal behavior. The paper discusses how adding memory to XCS can improve its ability to learn and adapt to changing environments.
Probabilistic Methods.   Explanation: The paper presents a method for maintaining mixtures of prunings of a prediction or decision tree, together with an efficient online weight allocation algorithm that can be used for prediction, compression, and classification. The algorithm correctly maintains the mixture weights for edge-based prunings under any bounded loss function, and a similar weight allocation algorithm is given for the logarithmic loss function. These techniques are all related to probabilistic methods in AI, which deal with uncertainty and probability distributions over possible outcomes.
Probabilistic Methods.   Explanation: The paper discusses Markov chain Monte Carlo (MCMC) methods, which are a type of probabilistic method commonly used in Bayesian statistics. The paper specifically focuses on convergence rates for MCMC algorithms, which is a key aspect of probabilistic methods.
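A minimal random-walk Metropolis sampler — the kind of Markov chain whose convergence rate such analyses bound — can be sketched as follows; the standard-normal target, proposal width, and chain length are illustrative choices, not the paper's setting.

```python
import numpy as np

def metropolis(log_target, n_samples=20000, width=1.0, seed=0):
    """Random-walk Metropolis sampling from an unnormalized log-density."""
    rng = np.random.default_rng(seed)
    x, chain = 0.0, []
    for _ in range(n_samples):
        prop = x + width * rng.standard_normal()     # symmetric random walk
        # accept with probability min(1, target(prop) / target(x))
        if np.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        chain.append(x)
    return np.array(chain)

chain = metropolis(lambda z: -0.5 * z * z)           # log-density of N(0, 1)
```

After burn-in the chain's empirical mean and variance approach those of the target, which is exactly the convergence behavior that rate analyses quantify.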
Probabilistic Methods.   Explanation: The paper discusses uncertainty in inferences and arguments, which is a key concept in probabilistic reasoning. The authors explore the significance of uncertainty in the premises and conclusion of an argument, and argue that uncertainty can be incorporated into deductive arguments, but this is not reflective of human argumentation and can be computationally costly. The paper does not discuss any other sub-categories of AI.
Probabilistic Methods, Genetic Algorithms.   Probabilistic Methods: The paper discusses stochastic search algorithms that sample the search space with respect to a probability distribution, which is updated based on previous samples and a predefined strategy. This is a fundamental mechanism of probabilistic methods.  Genetic Algorithms: The paper mentions Genetic Algorithms (GAs) as an instance of the stochastic search algorithm paradigm based on global random search. The paper also discusses SAGE, a search algorithm based on the same fundamental mechanisms as GAs.
Probabilistic Methods.   Explanation: The paper analyzes a hierarchical Bayes model and uses a Gibbs sampler to estimate the posterior distribution. This falls under the category of probabilistic methods, which involve modeling uncertainty using probability distributions and using them to make predictions or decisions.
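A toy Gibbs sampler for a bivariate normal with correlation rho illustrates the scheme: each coordinate is drawn from its full conditional in turn, and hierarchical-Bayes Gibbs samplers iterate the same pattern over model parameters. rho and the chain length are illustrative, not from the paper.

```python
import numpy as np

def gibbs_bivariate_normal(rho=0.8, n_samples=20000, seed=0):
    """Gibbs sampling for (x, y) ~ bivariate normal with correlation rho."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    sd = np.sqrt(1.0 - rho**2)
    samples = np.empty((n_samples, 2))
    for i in range(n_samples):
        x = rng.normal(rho * y, sd)   # draw x | y
        y = rng.normal(rho * x, sd)   # draw y | x
        samples[i] = (x, y)
    return samples

samples = gibbs_bivariate_normal()
```

The empirical correlation of the draws approaches rho, showing the sampler recovers the joint distribution from the conditionals alone.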
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses the choice of problem representation in genetic algorithms and how it affects the search process. It also mentions a metric designed to measure complexity with respect to a genetic algorithm.   Theory: The paper explores the general properties of representations and their relationship to neighborhood search methods. It also discusses the No Free Lunch theorem, which is a theoretical result in optimization and search algorithms.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper investigates the effectiveness of connectionist networks for predicting temporal sequences. The method of weight-elimination is used to address the problem of overfitting in back-propagation.   Probabilistic Methods: The ultimate goal is prediction accuracy, and the paper analyzes two time series - sunspot series and currency exchange rates - using connectionist networks. The paper also discusses the addition of a term penalizing network complexity to the cost function in back-propagation.
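The weight-elimination penalty referred to above adds a term of the form lam * sum((w/w0)^2 / (1 + (w/w0)^2)) to the back-propagation cost, so small weights are pushed toward zero while large weights incur a roughly constant cost. A sketch, with lam and w0 as illustrative hyperparameters:

```python
import numpy as np

def weight_elimination_penalty(w, lam=0.1, w0=1.0):
    """Complexity term added to the training cost."""
    r = (w / w0) ** 2
    return lam * np.sum(r / (1.0 + r))

def penalty_gradient(w, lam=0.1, w0=1.0):
    """Gradient of the penalty, added to the back-propagated error gradient."""
    r = (w / w0) ** 2
    return lam * (2.0 * w / w0**2) / (1.0 + r) ** 2

w = np.array([0.01, 5.0])
small = weight_elimination_penalty(w[:1])   # near-zero weight: tiny cost
large = weight_elimination_penalty(w[1:])   # large weight: cost saturates near lam
```

The saturation is what lets the network keep a few large, useful weights while pruning the rest.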
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses the use of genetic programming (GP) populations to solve the MAX problem, which is a classic problem in genetic algorithms. The paper analyzes the evolution of the GP populations and the impact of crossover and program size restrictions on the convergence to suboptimal solutions.  Theory: The paper presents theoretical models to explain the behavior of the GP populations and compares them with actual runs. The paper also confirms the basic message of a previous study and shows that evolution from suboptimal solutions to the optimal solution is possible if sufficient time is allowed.
Rule Learning, Theory.   The paper belongs to the sub-category of Rule Learning because it deals with Inductive Logic Programming (ILP), which is a subfield of machine learning that focuses on learning rules from examples. The paper discusses the operations of generalization and specialization, which are fundamental to ILP.   The paper also belongs to the sub-category of Theory because it provides a systematic treatment of the existence or non-existence of least generalizations and greatest specializations of finite sets of clauses in different ordered languages. The paper surveys results obtained by others and contributes some new results, which are based on theoretical analysis.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses the use of causal probabilistic networks in blood group determination of Danish Jersey cattle.   Neural Networks: The paper also applies several machine learning algorithms, including neural networks, to the same problem, and compares their performance and comprehensibility with those of the causal probabilistic networks.
Probabilistic Methods.   Explanation: The paper discusses Bayesian inference and computation in various state-space models, which is a probabilistic method for time series analysis. The paper also discusses the development of non-linear models based on stochastic deformations of time scales, which is another probabilistic approach.
Probabilistic Methods.   Explanation: The paper discusses Bayesian modelling efforts in time series analysis, which is a probabilistic approach to modelling. The paper also mentions non/semi-parametric models and robustness issues, which are related to probabilistic methods in terms of model flexibility and handling uncertainty.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents an architecture consisting of competing neural networks for the unsupervised segmentation of data streams.   Probabilistic Methods: Memory is included in the architecture to resolve ambiguities of input-output relations. The competition is adiabatically increased during training to obtain maximal specialization. The method achieves almost perfect identification and segmentation in the case of switching chaotic dynamics where input manifolds overlap and input-output relations are ambiguous. Applications to time series from complex systems demonstrate the potential relevance of the approach for time series analysis and short-term prediction.
Neural Networks, Probabilistic Methods, Theory.  Neural Networks: The paper discusses differential learning for statistical pattern classification, which is based on the classification figure-of-merit (CFM) objective function. This function is used to train neural network classifiers.  Probabilistic Methods: The paper mentions Bayesian discrimination, which is a probabilistic method for classification. Differential learning is said to require the least classifier complexity necessary for Bayesian discrimination.  Theory: The paper proves that differential learning is asymptotically efficient and guarantees the best generalization allowed by the choice of hypothesis class as the training sample size grows large. It also states that differential learning almost always guarantees the best generalization allowed by the choice of hypothesis class for small training sample sizes. These are theoretical results.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the study aimed to identify potentially useful decision and regression trees generated by machine learning algorithms. This is a type of rule learning where decision trees are generated to predict survival time based on the attributes of the patients.   Probabilistic Methods are also present in the text as the study aimed to assess the relative importance of the factors that might predict survival of patients with anaplastic thyroid carcinoma. This involves calculating probabilities of survival based on the different attributes of the patients.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as the authors study the behavior of a family of learning algorithms based on Sutton's method of temporal differences, which is a popular approach in reinforcement learning. The paper analyzes the performance of these algorithms in an on-line learning framework, where the goal is to estimate a discounted sum of all the reinforcements that will be received in the future.   Theory is also a relevant sub-category, as the paper provides general upper and lower bounds on the performance of the learning algorithms, without making any statistical assumptions about the process producing the training sequence. The authors also analyze the closely related problem of learning to predict in a model where the learner must produce predictions for a whole batch of observations before receiving reinforcement.
Genetic Algorithms.   Explanation: The paper discusses the use of Genetic Algorithms (GA) and proposes a mechanism called Dynamic Parameter Encoding (DPE) to improve the efficiency and precision of GA. The paper also explores the problem of premature convergence in GAs through two convergence models. There is no mention of other sub-categories of AI such as Case Based, Neural Networks, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Genetic Algorithms.   Explanation: The paper is specifically about genetic algorithms and their use of crossover operators. It discusses the benefits of different types of crossover operators and proposes an adaptive genetic algorithm that can determine which form of crossover is optimal for a given problem. The other sub-categories of AI listed (Case Based, Neural Networks, Probabilistic Methods, Reinforcement Learning, Rule Learning, Theory) are not directly related to the content of the paper.
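The two crossover forms such an adaptive GA typically chooses between can be sketched as below: one-point crossover preserves long contiguous blocks, while uniform crossover mixes genes independently. The bitstrings are illustrative, not from the paper.

```python
import random

def one_point(p1, p2, point):
    """Swap the tails of two parents at a fixed cut point."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def uniform(p1, p2, rng):
    """Choose each gene independently from one parent or the other."""
    mask = [rng.random() < 0.5 for _ in p1]
    c1 = [a if m else b for m, a, b in zip(mask, p1, p2)]
    c2 = [b if m else a for m, a, b in zip(mask, p1, p2)]
    return c1, c2

p1, p2 = [0] * 8, [1] * 8
c1, c2 = one_point(p1, p2, 3)
u1, u2 = uniform(p1, p2, random.Random(0))
```

With complementary all-0 and all-1 parents, every child pair covers each position exactly once, which makes the block-preserving vs gene-mixing contrast easy to see.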
Genetic Algorithms.   Explanation: The paper explicitly mentions the use of genetic programming techniques to evolve edge detectors, which falls under the category of genetic algorithms. The authors use a fitness function to evaluate the performance of the evolved detectors and select the best individuals for reproduction, which is a key aspect of genetic algorithms. The paper does not mention any other sub-categories of AI.
Probabilistic Methods.   Explanation: The paper discusses the use of Gibbs sampling, which is a probabilistic method, to estimate cointegrating relations and their weights in a VAR system. The Bayesian perspective also involves probabilistic reasoning.
Neural Networks, Reinforcement Learning  Explanation:  This paper belongs to the sub-category of Neural Networks as it discusses the use of Radial Basis Function Networks (RBFN) to improve performance. RBFN is a type of neural network that uses radial basis functions as activation functions. The paper proposes a method to improve the performance of RBFN by learning center locations.  Additionally, the paper also involves Reinforcement Learning as it discusses the use of a reinforcement learning algorithm to optimize the center locations of the RBFN. The algorithm is used to find the optimal center locations that minimize the error between the predicted output and the actual output.
Probabilistic Methods.   Explanation: The paper describes a Monte Carlo method, which is a probabilistic method for approximating solutions to decision analysis problems. The authors define an artificial distribution on the product space of alternatives and states, and use Markov chain Monte Carlo simulation to draw samples from this distribution. They then use exploratory data analysis tools to identify the optimal alternative based on the mode of the implied marginal distribution on the alternatives. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper describes a new sampling-based heuristic for tree search named SAGE, which is a probabilistic method. The algorithm uses random sampling to explore the search space and make decisions based on probabilities.  Rule Learning: The paper is focused on the problem of grammar induction, which is a type of rule learning. The goal is to learn a set of rules that can generate a given language. The SAGE algorithm is designed to search for these rules in a probabilistic way.
Case Based, Model-Based Reasoning, Machine Learning.   Case Based: The paper describes the use of conversational case-based reasoning (CCBR) to assist in problem-solving tasks. CCBR is a form of case-based reasoning in which users initiate problem-solving conversations by entering an initial problem description in natural language text; the CCBR system then assists in eliciting refinements of this description and in suggesting solutions. The paper also discusses the integration of NaCoDAE with other reasoning approaches, such as machine learning, model-based reasoning, and generative planning modules, to enhance the inferencing behaviors of the CCBR system.   Model-Based Reasoning: The paper discusses the integration of NaCoDAE with model-based reasoning modules to enhance the inferencing behaviors of the CCBR system. Model-based reasoning involves the use of models to reason about the behavior of a system or process; in the context of the paper, the model-based reasoning module would reason about the behavior of the CCBR system and assist in generating solutions to the user's problem.   Machine Learning: The paper discusses the integration of NaCoDAE with machine learning modules to enhance the inferencing behaviors of the CCBR system. Machine learning involves the use of algorithms to learn patterns in data and make predictions or decisions based on that learning.

Genetic Algorithms.   Explanation: The paper proposes a novel approach to constructing cooperation strategies using the Genetic Programming (GP) paradigm, which is a class of adaptive algorithms used to evolve solution structures that optimize a given evaluation criterion. The approach is based on designing a representation for cooperation strategies that can be manipulated by GPs. The experiments presented in the paper also show promising results using this approach. Therefore, the paper belongs to the sub-category of Genetic Algorithms in AI.
Genetic Algorithms.   Explanation: The paper explicitly mentions the use of genetic programming technique to evolve programs to control an autonomous agent. The agents are run through random environment configurations and randomly generated programs are recombined to form better programs. The fitness of each agent is determined by interpreting the associated program. The paper also discusses the success of the genetic programming technique in generating programs that enable an agent to handle any possible environment. There is no mention of any other sub-category of AI in the text.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper describes a methodology that involves a form of simulated evolution for building autonomous robots. This approach is based on genetic algorithms, which are used to evolve the controller of the robot in simulation.   Reinforcement Learning: The paper also mentions that the robot is trained to locate, recognize, and grasp a target object. This training is done using reinforcement learning, which involves providing the robot with feedback on its actions and adjusting its behavior accordingly.
Probabilistic Methods, Theory  Probabilistic Methods: The paper discusses the use of Bayesian networks and Markov models for model selection based on minimum description length. These are probabilistic methods that involve calculating probabilities and likelihoods to determine the best model.  Theory: The paper is based on the Minimum Description Length (MDL) principle, which is a theoretical framework for model selection. The MDL principle states that the best model is the one that minimizes the length of the description of the data and the model itself. The paper applies this principle to the selection of models in AI.
Genetic Algorithms.   Explanation: The paper discusses the use of genetic algorithms for optimizing the encoding and crossover of chromosomes in order to preserve geographical gene linkages. It references previous work on hyperplane synthesis in genetic algorithms and discusses the use of DFS-row-major reembedding for multi-dimensional encodings. The paper does not discuss any other sub-categories of AI.
Reinforcement Learning, Probabilistic Methods, Theory.   Reinforcement learning is the main focus of the paper, as the authors propose and analyze a new learning algorithm for partially observable Markov decision problems.   Probabilistic methods are also present, as the algorithm operates in the space of stochastic policies, which can yield a policy that performs considerably better than any deterministic policy.   Finally, the paper also belongs to the Theory sub-category, as it discusses the theoretical analysis of reinforcement learning algorithms in Markov environments and proposes a new algorithm for non-Markov decision problems.
Probabilistic Methods.   Explanation: The paper discusses Markov chain Monte Carlo (MCMC) algorithms, which are a type of probabilistic method used in Bayesian practice. The paper specifically focuses on constructing MCMC algorithms for hierarchical longitudinal models, which are statistical models that involve repeated measurements over time. The authors explore different blocking strategies to improve convergence and reduce autocorrelation in MCMC samples. Overall, the paper is primarily concerned with probabilistic methods for analyzing longitudinal data.
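The core MCMC mechanism referred to above can be sketched with a minimal Gibbs sampler. This is a generic illustration on a standard bivariate normal with correlation rho, not the hierarchical longitudinal model or the blocking strategies from the paper; each coordinate is drawn from its full conditional given the current value of the other.

```python
import random

def gibbs_bivariate_normal(n_samples, rho=0.8, burn_in=100):
    """Minimal Gibbs sampler for a standard bivariate normal with
    correlation rho, alternating draws from the full conditionals
    x | y ~ N(rho*y, 1 - rho^2) and y | x ~ N(rho*x, 1 - rho^2)."""
    x, y = 0.0, 0.0
    samples = []
    sd = (1 - rho ** 2) ** 0.5   # conditional standard deviation
    for i in range(n_samples + burn_in):
        x = random.gauss(rho * y, sd)
        y = random.gauss(rho * x, sd)
        if i >= burn_in:          # discard the burn-in portion
            samples.append((x, y))
    return samples
```

The autocorrelation visible in such chains is exactly what the blocking strategies studied in the paper aim to reduce.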
Genetic Algorithms, Rule Learning.   Genetic Algorithms are mentioned in the title and abstract as one of the two feature selection methods being compared. The paper discusses the strengths and limitations of this method in comparison to the Importance Score method.   Rule Learning is not explicitly mentioned, but the Importance Score method is described as a "greedy-like search," which involves iteratively selecting the best feature based on a set of rules or criteria. Therefore, the paper can be seen as discussing the effectiveness of rule-based approaches to feature selection.
Neural Networks, Rule Learning.   Neural Networks: The paper describes a new connectionist architecture called Simple Synchrony Networks (SSNs), which combines Simple Recurrent Networks (SRNs) with Temporal Synchrony Variable Binding (TSVB) to learn about patterns across time. The SSN is a type of neural network.  Rule Learning: The paper reports on experiments in language learning using a recursive grammar, where the network is trained on sentences with specific rules and restrictions on constituent classes. The SSN is able to learn and generalize these rules to sentences with more complex structures and unrestricted constituent classes. This demonstrates the ability of the SSN to learn and apply rules, making it a type of rule learning algorithm.
Genetic Algorithms, Rule Learning.   Genetic Algorithms: The paper uses machine-language genetic programming with crossover as one of the directed search techniques to learn recursive sequences. This involves evolving programs through genetic operations such as mutation and crossover.  Rule Learning: The paper describes the process of discovering programs that exactly reproduce a given finite prefix of a sequence and correctly produce the remaining sequence up to the underlying machine's precision. This involves learning rules or patterns in the sequence data. Additionally, the machine-language representation used in the paper contains instructions for arithmetic, register manipulation and comparison, and control flow, which can be seen as rules for manipulating data and controlling program flow.
Theory. This paper belongs to the Theory sub-category of AI. The paper establishes the desired implication for analytic systems in several cases, studies accessibility properties of the "control sets" recently introduced in the context of dynamical systems studies, and provides various examples and counterexamples relating to the various Lie algebras introduced in past work. These are all theoretical aspects of AI. The other sub-categories of AI, such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, and Reinforcement Learning, are not present in the text.
Probabilistic Methods.   Explanation: The paper discusses the theory and application of Bayesian networks, which are a type of probabilistic graphical model used for representing and reasoning about uncertain knowledge. The paper also introduces the concept of causal networks, which are a type of graphical model used for representing and reasoning about causal relationships between variables. Both Bayesian networks and causal networks are examples of probabilistic methods in AI.
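The kind of reasoning a Bayesian network supports can be sketched with exact inference by enumeration on the textbook rain/sprinkler/wet-grass network. The CPT numbers here are the standard classroom illustration, not figures from the paper.

```python
def p_rain_given_wet():
    """Exact inference by enumeration in the classic
    Rain -> Sprinkler -> WetGrass network: compute
    P(Rain = true | WetGrass = true) by summing out Sprinkler."""
    p_rain = 0.2
    p_sprinkler = {True: 0.01, False: 0.4}              # P(S | R)
    p_wet = {(True, True): 0.99, (True, False): 0.9,
             (False, True): 0.8, (False, False): 0.0}   # P(W | S, R)
    joint = {}
    for r in (True, False):
        total = 0.0
        for s in (True, False):
            ps = p_sprinkler[r] if s else 1 - p_sprinkler[r]
            total += ps * p_wet[(s, r)]
        joint[r] = (p_rain if r else 1 - p_rain) * total
    return joint[True] / (joint[True] + joint[False])
```

Observing wet grass raises the probability of rain from the 0.2 prior to roughly 0.36, the kind of evidential update these networks formalize.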
Theory.   Explanation: The paper presents a theoretical algorithm for learning a specific function class using quantum computation. It does not involve any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning. The focus is on exploring the theoretical possibility of using quantum computation to improve the efficiency of learning algorithms.
Probabilistic Methods.   Explanation: The paper discusses the problem of efficient probabilistic inference in Bayesian belief networks and proposes combinatorial optimization techniques to solve it. The paper does not mention any other sub-category of AI.
Probabilistic Methods, Theory  Probabilistic Methods: This paper belongs to the sub-category of probabilistic methods as it uses statistical simulation to model superscalar processors. The authors use probability distributions to model the behavior of the processor and its components, such as the instruction queue, the reservation stations, and the functional units. They also use statistical analysis to evaluate the performance of the processor under different workloads and configurations.  Theory: This paper also belongs to the sub-category of theory as it proposes a new modeling approach for superscalar processors based on statistical simulation. The authors develop a mathematical model that captures the behavior of the processor and its components, and they use this model to generate synthetic workloads and evaluate the performance of the processor. They also compare their results with those obtained from other modeling approaches, such as queuing theory and simulation-based methods.
Probabilistic Methods, Rule Learning  Probabilistic Methods: The paper uses a dynamic-programming distance to calculate the distance between each pair of segments, which is a probabilistic method.  Rule Learning: The paper introduces a novel self-organized cross-validated clustering algorithm, which is a rule learning method. The resulting hierarchical tree of clusters offers a new representation of protein sequences and families, which compares favorably with the most updated classifications based on functional and structural protein data. Motifs and domains such as the Zinc Finger, EF hand, Homeobox, EGF-like and others are automatically correctly identified. A novel representation of protein families is introduced, from which functional biological kinship of protein families can be deduced, as demonstrated for the transporters family.
Rule Learning, Theory.   Rule Learning is present in the paper as the authors describe the use of task-method-knowledge models and structure-behavior-function models to explain design reasoning and device designs, respectively. These models are based on rules that represent the knowledge and methods used by the system.   Theory is also present in the paper as the authors discuss the importance of explanation in building computer-based interactive design environments and analyze the content of explanations of design reasoning and design solutions. They also describe the use of a computer program, INTERACTIVE KRITIK, which uses these representations to visually illustrate the system's reasoning and the result of a design episode. This analysis and use of a computer program is based on theoretical concepts and principles.
Neural Networks.   Explanation: The paper describes the implementation of the backpropagation algorithm, which is a commonly used algorithm for training neural networks. The title also specifically mentions a "BP Neural Network Simulator." While the paper does mention other AI-related topics such as parallel programming and performance optimization, the main focus is on the implementation of a neural network simulator using object-oriented design.
Reinforcement Learning, Neural Networks.   The paper discusses the use of neural network reinforcement learning techniques to build an adaptive control system for home comfort systems. The system is designed to infer appropriate rules of operation based on the lifestyle of the inhabitants and energy conservation goals. The residence is equipped with sensors and actuators to provide information about environmental conditions and control various systems.
Neural Networks, Case Based.   Neural Networks: The paper discusses the use of machine learning techniques such as Fuzzy Controllers, MLPs, and RBFNs for generating non-linear controllers; MLPs and RBFNs are types of neural networks.  Case Based: The paper describes the use of integrated learning algorithms for two experimental test cases, one involving an industrial robot and the other a prediction task on a chaotic series. The use of examples to generate controllers is a characteristic of Case Based reasoning.
Genetic Algorithms, Neural Networks, Probabilistic Methods.   Genetic Algorithms (GAs) are mentioned as one of the four problem-solving technologies that make up Soft Computing (SC). The paper discusses the use of GAs to evolve neural networks (NNs) and to tune fuzzy logic (FL) controllers.   Neural Networks (NNs) are also mentioned as one of the four problem-solving technologies that make up SC. The paper discusses the use of NNs as controllers tuned by backpropagation-type algorithms and the use of GAs to evolve NNs.   Probabilistic Methods are mentioned as one of the four problem-solving technologies that make up SC. The paper discusses the use of probabilistic reasoning (PR) as a complementary method to solve complex, real-world problems.
Probabilistic Methods.   Explanation: The paper discusses the use of dynamic belief networks (DBNs) for monitoring walking, fall prediction, and detection. DBNs are a type of probabilistic graphical model that can represent uncertain relationships between variables over time. The paper describes how the DBN is constructed and how it is used to predict the likelihood of a fall based on various sensor inputs. The probabilistic nature of the DBN allows for uncertainty to be accounted for in the prediction and detection of falls. Therefore, this paper belongs to the sub-category of Probabilistic Methods in AI.
Reinforcement Learning, Neural Networks.   Reinforcement learning is present in the paper as the authors propose a reinforcement learning framework for the robot to learn control laws in local environments. The reinforcement function is generated from the sensory inputs of the robot before and after a control action is taken.   Neural networks are also present in the paper as the authors propose that the robot learns the control law in terms of a neural network within the reinforcement learning framework.
Probabilistic Methods.   Explanation: The paper discusses belief maintenance in Bayesian networks, which are probabilistic graphical models that represent uncertain relationships between variables. The paper focuses on how to update beliefs in these networks as new evidence is observed, which is a key aspect of probabilistic reasoning. While other sub-categories of AI may also be relevant to this topic (such as reinforcement learning for decision-making in uncertain environments), probabilistic methods are the most directly applicable.
Neural Networks.   Explanation: The paper specifically focuses on the implementation of artificial neural networks (ANNs) on parallel machines, and discusses the challenges and desired characteristics for such implementations. While other sub-categories of AI may be relevant to ANNs (such as genetic algorithms for optimizing ANN parameters), the primary focus of this paper is on the parallel implementation of ANNs.
Probabilistic Methods.   Explanation: The paper discusses an extension of Fill's exact sampling algorithm, which is a probabilistic method used for generating samples from a given probability distribution. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Theory.   Explanation: The paper focuses on deriving distribution-free uniform test error bounds for validation, which is a theoretical aspect of machine learning. The paper does not discuss any specific AI techniques or algorithms such as neural networks, reinforcement learning, or rule learning.
Neural Networks.   Explanation: The paper provides a brief history and overview of connectionist research, which is a subfield of artificial intelligence that focuses on neural networks. The paper discusses the different types of network architectures and learning rules used in current research, which are key components of neural network models. The paper also suggests that neural network research should incorporate functional principles inherent in neurobiological systems, further emphasizing the connection to neural networks.
Probabilistic Methods.   Explanation: The paper proposes a probabilistic axiomatization of measurement called ISOP (isotonic ordinal probabilistic models) and discusses related work in nonparametric latent variable and item response modeling. The paper does not discuss case-based, genetic algorithms, neural networks, reinforcement learning, or rule learning.
Probabilistic Methods.   Explanation: The paper focuses on the characterization of a specific type of probabilistic model, namely monotone unidimensional latent variable models. The authors discuss the properties and limitations of these models, as well as their applications in various fields such as psychology and education. The paper does not involve any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Probabilistic Methods, Neural Networks, Theory.   Probabilistic Methods: The paper describes a computational framework for sensorimotor integration that involves estimating the state of the environment and the observer's own state by integrating multiple sources of information. This involves probabilistic methods such as reducing variance in localization by integrating spatial information from visual and auditory systems.  Neural Networks: The paper discusses specific models of integration and adaptation resulting from the computational framework, which involve neural networks that simulate the dynamic behavior of the arm and predict the effects of remapping in the relation between visual and auditory space.  Theory: The paper presents psychophysical results from two sensorimotor systems and analyzes them within the computational framework, providing evidence for the existence of an internal model that simulates the dynamic behavior of the arm and captures the temporal propagation of errors in estimating the hand's state. This demonstrates the theoretical underpinnings of the sensorimotor integration system.
Genetic Algorithms, Neural Networks, Reinforcement Learning.   Genetic Algorithms: The paper proposes an approach of evolving neural networks with genetic algorithms to learn complex general behavior.   Neural Networks: The paper discusses evolving neural networks with genetic algorithms to learn complex general behavior.   Reinforcement Learning: The paper tests the proposed approach in the stochastic, dynamic task of prey capture, which is a form of reinforcement learning.
Genetic Algorithms, Neural Networks, Reinforcement Learning.   Genetic Algorithms: The paper presents the SANE (Symbiotic, Adaptive Neuro-Evolution) method, which is a type of genetic algorithm used to evolve networks capable of playing Go on small boards with no pre-programmed Go knowledge.   Neural Networks: The SANE method uses neural networks as the basis for the evolved networks that play Go.   Reinforcement Learning: The evolved networks were trained using a reinforcement learning approach, where they played against a simple computer opponent and received feedback on their performance.
Genetic Algorithms. This paper belongs to the Genetic Algorithms sub-category of AI. The paper investigates the behavior of the GA on floating representation problems and explores the effects of different types of pressures on GA performance. The paper also discusses the advantages of using the floating representation for the GA.
Theory.   Explanation: The paper discusses a survey of theory and methods of invariant item ordering, and does not mention any specific AI sub-categories such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper analyzes the performance of a genetic algorithm and compares it to a hill-climbing algorithm. It also discusses the features of an idealized genetic algorithm that give it a speedup over the hill-climbing algorithm.   Theory: The paper provides theoretical analysis of the performance of the algorithms and identifies the features that contribute to their speedup. It also discusses how these features can be incorporated into a real genetic algorithm.
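The kind of genetic algorithm analyzed above can be sketched concretely. This is a minimal generational GA on the one-max problem (a standard benchmark where hill climbing also performs well, making it a natural comparison point); the population size, operators, and rates are illustrative choices, not the idealized GA from the paper.

```python
import random

def ga_onemax(pop_size=30, length=20, generations=60, p_mut=0.02):
    """Minimal generational GA on one-max (fitness = number of
    1-bits): binary tournament selection, one-point crossover,
    and per-bit mutation."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    fitness = sum  # one-max fitness is just the bit count
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # binary tournament selection for each parent
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            cut = random.randrange(1, length)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if random.random() < p_mut else b
                     for b in child]                 # bit-flip mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)
```

Comparing the evaluations such a GA needs against a simple bit-flipping hill climber on the same function is the style of head-to-head analysis the paper carries out.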
Theory.   Explanation: The paper proposes modifications to the parallel variable distribution algorithm and presents a general framework for the analysis of this class of algorithms. It does not involve any specific AI subfield such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper uses a paradigm of statistical mechanics of financial markets (SMFM) to fit multivariate financial markets using Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities.   Rule Learning: The canonical momenta derived from the SMFM model are used as technical indicators in a recursive ASA optimization process to tune trading rules. These trading rules are then used on out-of-sample data to demonstrate that they can profit from the SMFM model. The paper emphasizes the utility of blending an intuitive and powerful mathematical-physics formalism to generate indicators which are used by AI-type rule-based models of management.
Neural Networks.   Explanation: The paper compares the computational power of different neural network models, specifically focusing on networks of spiking neurons. The paper does not discuss any other sub-categories of AI such as case-based reasoning, genetic algorithms, probabilistic methods, reinforcement learning, or rule learning.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms are directly compared to Very Fast Simulated Reannealing (VFSR) in the paper. The paper presents a suite of six standard test functions to GA and VFSR codes from previous studies, without any additional fine tuning, to compare their efficiency.   Probabilistic Methods are also present in the paper as both GA and VFSR are stochastic optimization algorithms that use probabilistic methods to search for the optimal solution. VFSR is statistically guaranteed to find the function optima, which is a probabilistic property.
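The stochastic-optimization side of the comparison above can be sketched with a generic simulated-annealing loop. This is the textbook scheme with a slow logarithmic cooling schedule, not the VFSR/ASA re-annealing schedule from the paper; the step size and temperature settings are illustrative.

```python
import math
import random

def simulated_annealing(f, x0, n_iters=5000, t0=1.0, step=0.5):
    """Generic 1-D simulated annealing: propose a Gaussian move,
    always accept improvements, accept uphill moves with probability
    exp(-delta / T), and cool T logarithmically over time."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(1, n_iters + 1):
        t = t0 / math.log(k + 1)           # slow (Boltzmann) cooling
        x2 = x + random.gauss(0.0, step)
        fx2 = f(x2)
        if fx2 < fx or random.random() < math.exp(-(fx2 - fx) / t):
            x, fx = x2, fx2
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f
```

The statistical convergence guarantee mentioned for VFSR rests on a carefully chosen cooling schedule; this generic loop only illustrates the accept/reject mechanism shared by that family of methods.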
Rule Learning, Theory.   Rule Learning is the most related sub-category as the paper discusses the refinement of rule bases to make them consistent with a set of input training examples. Theory is also relevant as the paper focuses on the problem of theory refinement in machine learning.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper deals with the problem of belief aggregation, which involves probabilistic beliefs of individual agents. The proposed market-based approach involves agents betting on uncertain events, which is a probabilistic method of aggregating beliefs.   Theory: The paper presents a theoretical framework for belief aggregation through a market-based approach. It discusses the properties of the aggregate probability and its relationship with independently motivated techniques. The paper also argues that the proposed approach provides a decision-theoretic foundation for expert weights often used in centralized pooling techniques.
Neural Networks.   Explanation: The paper discusses the application of semilinear predictability minimization to real-world images, and how the system learns to generate distributed representations based on well-known feature detectors, such as orientation-sensitive edge detectors and off-center-on-surround-like structures. This is achieved without a teacher and without significant preprocessing, indicating the use of unsupervised learning. These are all characteristics of neural networks, which are a sub-category of AI that are designed to mimic the structure and function of the human brain.
Reinforcement Learning.   Explanation: The paper is specifically about reinforcement learning with self-modifying policies, and discusses an algorithm (the success-story algorithm) for improving these policies through experience. While other sub-categories of AI may be relevant to reinforcement learning, they are not discussed in this paper.
Reinforcement Learning, Probabilistic Methods.   Reinforcement learning is present in the text as the paper focuses on task sequences that allow for speeding up the learner's average reward intake through appropriate shifts of inductive bias. The paper also mentions traditional reinforcement learning failing in complex, partially observable environments.   Probabilistic methods are present in the text as the success-story algorithm (SSA) uses backtracking to undo those bias shifts that have not been empirically observed to trigger long-term reward accelerations (measured up until the current SSA call). The paper also mentions plugging in a wide variety of learning algorithms, including a novel, adaptive extension of Levin search and a method for embedding the learner's policy modification strategy within the policy itself (incremental self-improvement), which may involve probabilistic methods.
Rule Learning, Theory.   Rule Learning is the most related sub-category as the paper discusses the inductive logic programming system LOPSTER and its extension CRUSTACEAN, which are both rule-based learning systems. The paper compares the performance of these systems in inducing recursive relations from small datasets.   Theory is also a relevant sub-category as the paper presents a hypothesis about the extension of LOPSTER and empirically evaluates its ability to induce recursive relations. The paper also discusses the advantage of basing induction on logical implication rather than subsumption, which is a theoretical concept in AI.
Reinforcement Learning.   Explanation: The paper discusses the TD(λ) algorithm, a popular family of algorithms for approximate policy evaluation in large MDPs, and extends it to the Least-Squares TD (LSTD) algorithm, a model-based reinforcement learning technique. The paper also discusses the drawbacks of TD(λ) and how LSTD improves upon them. Therefore, the paper primarily belongs to the Reinforcement Learning sub-category of AI.
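The least-squares idea behind LSTD can be sketched in a few lines: instead of stochastic updates, accumulate the statistics A = sum phi(s)(phi(s) - gamma*phi(s'))^T and b = sum phi(s)*r along a trajectory, then solve A w = b once. The two-state chain, one-hot features, and reward scheme below are invented for the illustration, not taken from the paper.

```python
import random

def lstd_two_state(n_steps=20000, gamma=0.9):
    """Minimal LSTD sketch on a 2-state chain with one-hot features.
    From either state, the next state is chosen uniformly at random;
    reward is 1 whenever the current state is state 1.  Accumulate
    A and b along the trajectory, then solve the 2x2 system A w = b."""
    A = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    s = 0
    for _ in range(n_steps):
        s2 = random.choice((0, 1))
        r = 1.0 if s == 1 else 0.0
        phi = [1.0 if i == s else 0.0 for i in range(2)]
        phi2 = [1.0 if i == s2 else 0.0 for i in range(2)]
        for i in range(2):
            b[i] += phi[i] * r
            for j in range(2):
                A[i][j] += phi[i] * (phi[j] - gamma * phi2[j])
        s = s2
    # closed-form 2x2 solve of A w = b
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    w0 = (A[1][1] * b[0] - A[0][1] * b[1]) / det
    w1 = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return [w0, w1]
```

For this chain the true values are V(0) = 4.5 and V(1) = 5.5, and the least-squares solution approaches them directly, with no step-size parameter to tune; that is the advantage over TD(λ) the paper develops.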
Probabilistic Methods.   Explanation: The paper discusses the use of importance sampling, which is a probabilistic method for estimating properties of a target distribution by drawing samples from a different, easier-to-sample distribution. The paper also mentions the use of Markov chain transitions and annealing sequences, which are common techniques in probabilistic modeling and inference.
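The basic importance-sampling estimator referred to above can be sketched directly: draw from an easy proposal q, weight each sample by p(x)/q(x), and average. The target, proposal, and test function below are an invented illustration (annealed importance sampling as in the paper builds a sequence of such distributions bridged by Markov chain transitions).

```python
import math
import random

def importance_sampling_mean(n=20000):
    """Basic importance sampling: estimate E_p[f(X)] for target
    p = N(0, 1) by drawing from proposal q = N(0, 2) and weighting
    each sample by p(x) / q(x).  With f(x) = x**2 the true answer
    is 1 (the variance of p)."""
    def normal_pdf(x, sigma):
        return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    total = 0.0
    for _ in range(n):
        x = random.gauss(0.0, 2.0)                      # sample from q
        w = normal_pdf(x, 1.0) / normal_pdf(x, 2.0)     # importance weight
        total += w * x ** 2
    return total / n
```

When the proposal is far from the target the weights become highly variable; the annealing sequence discussed in the paper exists precisely to keep successive distributions close enough that the weights stay well behaved.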
Genetic Algorithms.   Explanation: The paper discusses the Genetic Programming optimization method, which is a variant of Genetic Algorithms. The focus is on identifying redundancy in GP, which is a key aspect of Genetic Algorithms. The paper does not discuss any other sub-category of AI.
Genetic Algorithms.   Explanation: The paper discusses a metaheuristic approach for graph coloring problems that is based on a population search and uses crossover operators, which are a key component of genetic algorithms. The authors also mention how a methodology inspired by Competitive Analysis can be used to design better crossover operators. While other sub-categories of AI may also be relevant to the problem of graph coloring, such as Probabilistic Methods or Reinforcement Learning, the focus of this paper is on the use of genetic algorithms and their associated techniques.
Rule Learning, Theory.   The paper discusses the concept of decision trees as a method for inductive inference, which falls under the category of rule learning. The paper also presents an algorithm for determining the equivalence of decision trees, which is a theoretical aspect of decision tree learning.
Neural Networks, Theory.   Neural Networks: The paper proposes to use masks derived from synaptic weight patterns, which are a key component of neural networks, to assess the relevance of theories of synaptic modification as models of feature extraction in human vision.   Theory: The paper aims to assess the relevance of theories of synaptic modification as models of feature extraction in human vision, and compares two different methods of feature extraction (PCA and BCM) to test their effectiveness in reducing the generalization performance of human subjects.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the use of PATHINT, a non-Monte-Carlo path-integral algorithm, to embed the Duffing oscillator model in moderate noise. This algorithm is specifically designed to handle nonlinear Fokker-Planck systems, which are probabilistic in nature.  Theory: The paper presents a two-dimensional time-dependent Duffing oscillator model of macroscopic neocortex and investigates whether chaos in neocortex can survive in noisy contexts. The paper also discusses the use of PATHINT, a theoretical approach, to embed the model in noise.
Neural Networks.   Explanation: The paper discusses an optimization scheme for neural networks, specifically for pruning weights to improve generalization. The implementation of the scheme involves extending existing neural network optimization algorithms (OBD and OBS). There is no mention of any other sub-category of AI in the text.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper uses genetic programming to evolve board evaluation functions. Genetic programming is a type of genetic algorithm that evolves computer programs to solve a specific problem. In this case, the problem is to create a board evaluation function that can evaluate the strength of a given chess position.   Reinforcement Learning: The paper uses reinforcement learning to train the evolved board evaluation functions. Reinforcement learning is a type of machine learning where an agent learns to make decisions by interacting with an environment and receiving feedback in the form of rewards or punishments. In this case, the agent is the board evaluation function and the environment is the chess game. The function receives a reward or punishment based on how well it evaluates the strength of a given chess position.
Case Based, Reinforcement Learning  Explanation:   - Case Based: The Q2 algorithm is based on instance-based learning, which is a subfield of case-based reasoning. The paper mentions "conventional instance-based approaches to learning" and describes how Q2 defines a neighborhood for performing experiments. - Reinforcement Learning: While Q2 is not explicitly described as a reinforcement learning algorithm, it involves an iterative process of selecting experiments and updating a model based on the results, which is a common characteristic of reinforcement learning algorithms. The paper also mentions "evolutionary methods" among the existing approaches to optimizing noisy continuous functions.
Genetic Algorithms, Rule Learning.   Genetic algorithms are mentioned as the constructive induction engine used in the proposed approach. The paper describes how the iterative modification of input data space is performed using genetic algorithms.   Rule learning is also present in the paper as the final classification is obtained by a weighted majority voting rule, according to the n²-classifier approach. The paper also discusses the subspaces of attributes dedicated to optimal discrimination of appropriate pairs of classes, which can be seen as rules for classification.
Probabilistic Methods.   The paper discusses the development of a theory of the statistical mechanics of combat (SMC) using modern methods of statistical mechanics, which is a probabilistic approach to modeling complex systems. The paper also mentions the use of Very Fast Simulated Re-Annealing (VFSR), a probabilistic optimization algorithm, for fitting models to empirical data.
Theory. The paper primarily discusses theoretical concepts and paradigms related to the study of neocortical interactions, including mathematical physics and statistical mechanics. The authors critique other studies that make unsupported claims about chaos and quantum physics, and highlight the importance of sound theory and reproducible experiments in understanding neocortical function. The paper does not discuss any specific AI techniques or applications.
Probabilistic Methods.   Explanation: The paper applies statistical mechanics methodology to term-structure bond-pricing models, which involves the use of probabilistic methods to model the behavior of the bond market. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms: The paper focuses on evaluating and improving steady state evolutionary algorithms (EAs) on constraint satisfaction problems (CSPs). Genetic algorithms are a class of evolutionary algorithm that mimics the process of natural selection to evolve solutions to problems. The paper discusses various modifications to the standard EA, such as the use of different selection and mutation operators, and the impact of these modifications on the algorithm's performance.  Probabilistic Methods: The paper also discusses the use of probabilistic methods, such as probability distributions to guide the search process, in improving the performance of EAs on CSPs. The authors propose a new probabilistic algorithm called Probabilistic Local Search with Restart (PLSR) and compare its performance to other existing algorithms.
Genetic Algorithms.   Explanation: The paper presents empirical results on the effectiveness of incremental evolution for genetic programming, which is a subfield of evolutionary computation that uses genetic algorithms to evolve solutions to problems. The paper does not discuss any other sub-categories of AI.
Neural Networks, Theory.   Neural Networks: The paper discusses the role of neural networks in the retina and how they process spatiotemporal information. It also mentions specific types of cells in the retina that function as neural networks.  Theory: The paper proposes a unified theory of spatiotemporal processing in the retina, which involves synthesizing existing theories and experimental findings. It also discusses the limitations of current theories and suggests future directions for research.
Genetic Algorithms. This paper belongs to the sub-category of Genetic Algorithms. The paper studies the performance of six algorithms in NK-landscapes of low and high dimension while keeping the amount of epistatic interaction constant. The algorithms studied include standard genetic algorithms employing crossover or mutation, and genetic local search algorithms. The results underline the importance of considering high-dimensional landscapes when evaluating the performance of evolutionary algorithms.
Theory. This paper belongs to the sub-category of AI called Theory. It presents a reconstruction of theories of rational belief revision according to an economic standard of rationality, which involves using preferences to select among alternative possible revisions. The paper also examines formally how different limitations on rationality affect belief revision. There is no mention of any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning.
Probabilistic Methods.   Explanation: The paper discusses the use of a contrast function based on higher-order cumulants for the estimation of Independent Component Analysis (ICA), which is a statistical signal processing technique. The paper also introduces a fixed-point iteration scheme for finding the relevant extrema of the contrast function. Both of these techniques are probabilistic methods commonly used in signal processing and machine learning.
Probabilistic Methods.   Explanation: The paper presents a methodology for representing probabilistic relationships using a Bayesian network and demonstrates how it can be mapped to a market price system. The focus is on using probabilistic methods to model uncertainty and make inferences. Other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, and Rule Learning are not directly relevant to the paper's content.
Probabilistic Methods.   The paper discusses the use of probabilistic models for change point and change curve modeling in stochastic processes and spatial statistics. The author uses Bayesian methods and Markov chain Monte Carlo (MCMC) algorithms to estimate parameters and make predictions. The paper also discusses the use of mixture models and model selection criteria based on likelihood and Bayesian information criteria. Overall, the paper focuses on probabilistic modeling and inference methods for analyzing data.
Case Based, Constraint Reasoning  Explanation:  - Case Based: The paper mentions that the CHARADE platform integrates case-based reasoning as part of its problem-solving architecture. - Constraint Reasoning: The paper also mentions that the CHARADE platform uses constraint reasoning as part of its problem-solving architecture.
Case Based, Constraint Reasoning, Planning.   Case Based: The paper proposes an approach based on the integration of skeletal planning and case based reasoning techniques with constraint reasoning.   Constraint Reasoning: The paper proposes the use of temporal constraints in two steps of the planning process: plan fitting and adaptation, and resource scheduling.   Planning: The paper discusses the complexity of defining a planning approach for the domain of forest fire fighting and proposes an approach based on the integration of different planning techniques. The development of the system software architecture with an OOD methodology is also mentioned.
Neural Networks, Theory.   Neural Networks: The paper discusses the implementation of back prop algorithms on T0, a vector processor designed for neural network simulation. It also mentions the use of Matrix Back Prop, a matrix formulation of back prop that has been shown to be efficient on some RISCs.   Theory: The paper discusses the efficient implementation of back prop algorithms on T0 and the use of a mixture of fixed and floating point operations for good convergence. It also mentions the asymptotically optimal performance achieved using Matrix Back Prop.
Genetic Algorithms.   Explanation: The paper describes the use of genetic operations in the GGE software to generate novel 3D forms for architects. This is a clear indication that the paper belongs to the sub-category of AI known as Genetic Algorithms.
Rule Learning, Theory.   Explanation: The paper discusses the Set Enumeration (SE) tree as a generalization of decision trees, which is a type of rule learning algorithm. The paper also empirically characterizes domains in which SE-trees are advantageous, which falls under the category of theory in AI research. The other sub-categories of AI (Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning) are not directly relevant to the content of this paper.
Rule Learning.   Explanation: The paper discusses the approach of learning classification rules from data using two modules, LINNEO+ and GAR. LINNEO+ is a knowledge acquisition tool that automatically generates classes from examples using an unsupervised strategy, while GAR is used to generate a set of classification rules for the original training set. The paper presents an application of these techniques to data obtained from a real wastewater treatment plant in order to help the construction of a rule base. Therefore, the paper primarily focuses on the process of learning rules from data, making it a Rule Learning sub-category of AI.
This paper belongs to the sub-category of AI called Neural Networks. Neural networks are present in the text as the authors discuss the role of neural circuits in sensorimotor integration and how they can be modeled using artificial neural networks. The authors also discuss how neural networks can be used to simulate and study sensorimotor integration in various contexts.
Neural Networks, Probabilistic Methods.   Neural Networks are mentioned in the abstract as being very good at modeling on-line disambiguation behavior. The paper also discusses how a subsymbolic neural network can be combined with high-level control to process novel combinations of relative clauses systematically.   Probabilistic Methods are also mentioned in the abstract as being used to dynamically combine the strengths of association between keywords and senses to form the most likely interpretation. The paper also discusses how semantic constraints are at work in both the disambiguation task and the processing of embedded clauses.
Neural Networks, Case Based  Explanation:  The paper investigates the generalization capabilities of backpropagation learning in feed-forward and recurrent feed-forward connectionist networks, which are types of neural networks. Additionally, the paper compares the results to an exemplar-based generalization scheme, which is a type of case-based reasoning. Therefore, the paper belongs to the sub-categories of Neural Networks and Case Based.
Neural Networks.   Explanation: The paper presents a framework for incorporating pruning strategies in the MTiling constructive neural network learning algorithm. The focus is on reducing the network size without compromising its generalization performance. The paper describes three sensitivity-based strategies for pruning neurons. All of these are related to neural networks, which are a sub-category of AI.
Neural Networks, Theory.   Neural Networks: The paper discusses various neural learning rules for Independent Component Analysis (ICA) and proposes a simple Hebbian or anti-Hebbian learning rule for ICA. The paper also mentions the use of non-linear functions in the learning rule, which is a common feature of neural networks.  Theory: The paper presents a theoretical analysis of the ICA problem and shows that it can be solved by simple Hebbian or anti-Hebbian learning rules. The paper also discusses the relationship between the learning rule and information-theoretic quantities, which is a theoretical aspect of the problem.
Neural Networks.   Explanation: The paper specifically focuses on different types of constructive algorithms for training feed-forward neural networks, and discusses their convergence properties and effectiveness in solving problems. The paper does not mention any other sub-categories of AI such as case-based reasoning, genetic algorithms, probabilistic methods, reinforcement learning, or rule learning.
Reinforcement Learning, Probabilistic Methods.   Reinforcement learning is the main focus of the paper, as the authors introduce a new method for prioritized sweeping, a model-based reinforcement learning technique.   Probabilistic methods are also relevant, as the authors apply their method to generalized model approximators, such as Bayesian networks, which rely heavily on probabilistic reasoning.
This paper belongs to the sub-category of AI called Case Based.   Explanation:  The paper discusses the selection of distance metrics and feature subsets for k-Nearest Neighbor classifiers. This is a case-based approach to classification, where new instances are classified based on their similarity to previously observed instances. The paper explores different distance metrics and feature subsets to improve the accuracy of the k-NN classifier. Therefore, the paper is focused on the use of past cases to inform future decisions, which is the essence of case-based reasoning.
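As a toy illustration of the design space the paper explores, here is a minimal k-Nearest Neighbor classifier in which both the distance metric and the feature subset are pluggable; the data, defaults, and function name are made up for the example and are not from the paper.

```python
from collections import Counter

def knn_predict(train, query, k=3, features=None, dist=None):
    # train: list of (feature_vector, label) pairs
    if features is None:
        features = range(len(query))           # default: use every feature
    if dist is None:                           # default: squared Euclidean over the subset
        dist = lambda a, b: sum((a[i] - b[i]) ** 2 for i in features)
    neighbours = sorted(train, key=lambda ex: dist(ex[0], query))[:k]
    # majority vote among the k nearest stored cases
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(knn_predict(train, (0.2, 0.1)))  # → "a"
```

Swapping in a different `dist` or a smaller `features` set changes which stored cases count as "similar", which is precisely the selection problem the paper studies.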
Neural Networks, Probabilistic Methods.   Neural Networks: The paper focuses on developing constructive neural network learning algorithms for multi-category real-valued pattern classification. The authors discuss the architecture of the neural network and the learning algorithm used to train it.   Probabilistic Methods: The authors also discuss the use of probabilistic methods in the learning algorithm, specifically the use of Bayesian inference to estimate the posterior distribution of the model parameters. They also mention the use of a Gaussian mixture model to model the class-conditional densities.
Rule Learning, Theory.   The paper discusses the difficulties of learning logic programs, which falls under the sub-category of Rule Learning. The paper also presents a theoretical analysis of the problem and the limitations of current induction techniques, which falls under the sub-category of Theory.
Neural Networks.   Explanation: The paper describes a neural network architecture, specifically a multi-layer perceptron (MLP) with gamma filters and gain terms. The paper also compares this architecture with other neural network architectures and a local approximation scheme.
Neural Networks.   Explanation: The paper introduces neural one-unit learning rules for the problem of Independent Component Analysis (ICA) and blind source separation. The learning rules use simple constrained Hebbian/anti-Hebbian learning, which is a common technique in neural network learning. The paper also introduces a novel computationally efficient fixed-point algorithm to speed up the convergence of the stochastic gradient descent rules. Therefore, this paper belongs to the sub-category of Neural Networks in AI.
Neural Networks.   Explanation: The paper discusses a collection of papers on connectionism, which is a subfield of AI that focuses on neural networks. The book is a compilation of research papers by graduate students who participated in a summer school on connectionism. The paper also mentions previous summer schools on connectionism and a future one scheduled to be held in 1993. All of these indicate a focus on neural networks and connectionism.
Theory. The paper is specifically about the task of theory revision in knowledge-based systems, and discusses the computational complexity of this task.
Rule Learning, Theory.   The paper discusses a method for constructing conjunctions as new attributes for decision tree learning, which involves searching for conditions (attribute-value pairs) from paths to form new attributes. This method is compared to other hypothesis-driven new attribute construction methods, and the new idea is that it carries out systematic search with pruning over each path of a tree to select conditions for generating a conjunction. Therefore, conditions for constructing new attributes are dynamically decided during search. This approach falls under the sub-category of Rule Learning.   The paper also evaluates the performance of the method in terms of both higher prediction accuracy and lower theory complexity, which falls under the sub-category of Theory.
Neural Networks  Explanation: The paper focuses on the problems of standard recurrent neural networks with long time lags between relevant signals and proposes an alternative method of random weight guessing to solve these problems. The paper does not mention any other sub-categories of AI.
Genetic Algorithms.   Explanation: The paper focuses on a class of quadratic systems that are widely used as a model in population genetics and also in genetic algorithms. The systems describe a process where random matings occur between parental chromosomes via a mechanism known as "crossover": i.e., children inherit pieces of genetic material from different parents according to some random rule. The paper develops a general technique for computing the expected value of the number of generations required for a population to reach a certain state, which is a fundamental problem in genetic algorithms. Therefore, the paper belongs to the sub-category of Genetic Algorithms in AI.
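The crossover mechanism described above can be sketched concretely; the one-point variant and the bit-string encoding below are illustrative simplifications, not the paper's exact random rule.

```python
import random

random.seed(1)

def one_point_crossover(parent_a, parent_b):
    # children inherit a prefix from one parent and the suffix from the other
    point = random.randint(1, len(parent_a) - 1)   # cut point strictly inside the string
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

c1, c2 = one_point_crossover("11111111", "00000000")
print(c1, c2)  # genetic material is exchanged between children, none created or lost
```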
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses Maximum Likelihood Estimations (MLE) methods, which are probabilistic methods used to construct phylogenies based on DNA data. The paper also introduces a metric on stochastic process models of evolution and presents a simple and efficient algorithm for inverting the stochastic process of evolution.  Theory: The paper presents a result on the PAC-learnability of the class of distributions produced by tree-like processes and establishes a lower-bound convergence rate for the algorithm. The paper also discusses the computational intractability of MLE methods and presents the first polynomial-time algorithm that is guaranteed to converge to the correct tree.
Reinforcement Learning.   Explanation: The paper presents an extension to Q-learning, which is a type of reinforcement learning algorithm. The focus of the paper is on developing learning techniques for delayed reward problems in continuous domains, which is a key area of research in reinforcement learning. The paper discusses how the Q-learning algorithm is adapted to work with real-valued states and actions, which is a common challenge in reinforcement learning for real-world applications such as robotics. Therefore, reinforcement learning is the most related sub-category of AI to this paper.
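For context, the tabular Q-learning update that the paper extends to real-valued states and actions looks like this on a toy discrete chain; the environment, constants, and episode count are illustrative assumptions, not the paper's setup.

```python
import random

random.seed(0)

# Toy chain MDP: states 0..3, reward 1 only on reaching the goal state 3.
N_STATES, GOAL = 4, 3
ACTIONS = (1, -1)                        # move right or left

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2        # learning rate, discount, exploration

for _ in range(300):                     # episodes
    s = 0
    while s != GOAL:
        if random.random() < eps:        # epsilon-greedy action selection
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Q-learning update: bootstrap on the best action in the next state
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

greedy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(greedy)  # the learned greedy policy moves right toward the goal
```

The continuous-domain extension in the paper replaces the lookup table `Q` with a function approximator over real-valued states and actions, while keeping the same bootstrapped update.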
Probabilistic Methods.   Explanation: The paper deals with stochastic smoothing/filtering and estimation with incomplete data, which are probabilistic methods. The paper proposes a martingale approach for estimation and convergence with incomplete data, which is also a probabilistic method. The paper does not involve case-based reasoning, genetic algorithms, neural networks, reinforcement learning, or rule learning.
Case Based, Inductive Machine Learning.   Case-based reasoning is the main focus of the paper, as it is used to improve problem handling in customer support. The paper also discusses the challenges of building and maintaining a case base, which is a key aspect of case-based reasoning.   Inductive machine learning is also discussed as a way to automatically extract knowledge from raw data and continually acquire and revise knowledge. The paper suggests combining inductive machine learning with case-based reasoning to create an intelligent system for customer support.
Genetic Algorithms.   Explanation: The paper describes the use of the Genetic Programming (GP) algorithm, which is a type of genetic algorithm, to solve a difficult problem with a large set of training cases. The paper then proposes three subset selection methods to reduce the number of function-tree evaluations needed during the GP algorithm. The paper primarily focuses on the use of genetic programming and its optimization techniques, making it most closely related to the Genetic Algorithms sub-category of AI.
Genetic Algorithms, Rule Learning.   Genetic Algorithms: The paper presents a modification to the standard supervised learning approach in Genetic Programming (GP), which involves altering the fitness score of an individual based on how many cases remain uncovered in the training set after the individual exceeds an error limit. This modification is referred to as Limited Error Fitness (LEF).   Rule Learning: The LEF approach involves dynamically altering the training set order and the error limit in response to the performance of the fittest individual in the previous generation. This can be seen as a form of rule learning, where the algorithm is learning to adjust its parameters based on the observed performance of the system.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper discusses decision tree classifiers and different methods for pruning them.   Probabilistic Methods are also present in the text as the paper discusses the use of probability distributions at the leaves of the decision tree and the Laplace correction to estimate these distributions.
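The Laplace correction mentioned above is simple to state: add one pseudo-count per class before normalising, so no class at a leaf gets probability zero. A minimal sketch (function name and example counts are our own):

```python
def laplace_probs(class_counts, n_classes):
    # Laplace correction at a decision-tree leaf:
    # (count + 1) / (total + n_classes) never yields a zero estimate
    total = sum(class_counts.values())
    return {c: (class_counts.get(c, 0) + 1) / (total + n_classes)
            for c in range(n_classes)}

# A leaf that saw 3 examples of class 0 and none of class 1:
print(laplace_probs({0: 3}, 2))  # → {0: 0.8, 1: 0.2}
```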
Genetic Algorithms, Theory.   Explanation:  1. Genetic Algorithms: The paper primarily discusses the use of genetic algorithms and tournament selection in solving optimization problems. It also explores the effects of noise on the performance of these algorithms.  2. Theory: The paper presents theoretical analysis and experimental results to support its claims about the effectiveness of tournament selection and the impact of noise on genetic algorithms. It also discusses the implications of these findings for future research in the field.
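Tournament selection itself is brief to sketch; the code below, with made-up Gaussian fitness noise, illustrates how noisy evaluations blur the selection pressure that the paper analyses (population, tournament size, and noise level are illustrative assumptions).

```python
import random

random.seed(0)

def tournament_select(population, fitness, k=3, noise_sd=0.0):
    # draw k contestants at random; the one with the best (noisy) fitness wins
    contestants = random.sample(population, k)
    return max(contestants, key=lambda ind: fitness(ind) + random.gauss(0.0, noise_sd))

pop = list(range(10))                        # individuals 0..9, fitness = identity
exact = [tournament_select(pop, float) for _ in range(2000)]
noisy = [tournament_select(pop, float, noise_sd=5.0) for _ in range(2000)]
print(sum(exact) / 2000, sum(noisy) / 2000)  # noise pulls the mean winner down
```

With exact evaluations the mean selected individual sits well above the population mean of 4.5; adding noise weakens that pull, which is the effect studied in the paper.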
Theory  Explanation: The paper touches on the practical implementation of the Fourier transform algorithm and its usefulness as a practical tool, but its main focus is on the theoretical foundations of the algorithm and its role in proving important learnability results. The paper does not discuss any specific application of AI or any specific AI technique, but rather the theoretical and algorithmic aspects of the Fourier transform. Therefore, the paper belongs to the Theory sub-category of AI.
Genetic Algorithms.   Explanation: The paper discusses the use of Genetic Programming (GP) and compares the performance of different variations of GP on classification problems. The focus is on the use of small populations in GP, which is a key aspect of Genetic Algorithms. The paper does not discuss any other sub-category of AI.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the authors use a rule-learning program to uncover indicators of fraudulent behavior from a large database of customer transactions. These indicators are then used to create a set of monitors, which profile legitimate customer behavior and indicate anomalies.   Probabilistic Methods are present in the text as the outputs of the monitors are used as features in a system that learns to combine evidence to generate high-confidence alarms. This system is able to adapt to the changing conditions typical of fraud detection environments.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper describes a method for constructing a Bayesian network to model the underlying joint probability distribution of a set of discrete random variables. The approach is based on clustering the samples and using Bayesian prototype vectors to represent the conditional probabilities of each cluster. The likelihood of the Bayesian tree is also evaluated.   Neural Networks: The Bayesian network constructed in the paper can be realized as a feedforward neural network capable of probabilistic reasoning. The paper describes the process of learning in this framework, which involves choosing the size of the prototype set, partitioning the samples into clusters, and constructing the cluster prototypes. The paper also presents a greedy heuristic for searching through the space of different partition schemes with different numbers of clusters to find an optimal approximation of the probability distribution.
Genetic Algorithms.   Explanation: The paper introduces two new crossover operators for Genetic Programming (GP), which is a subfield of Genetic Algorithms. The paper does not mention any other sub-categories of AI.
Genetic Algorithms.   Explanation: The paper discusses an extension of the genetic programming (GP) paradigm called Adaptive Representation through Learning (ARL). The ARL algorithm uses genetic operations to discover and modify subroutines, which act as building blocks to accelerate the evolution of good representations for learning from observation and interaction with an environment. Therefore, the paper belongs to the sub-category of AI known as Genetic Algorithms.
Reinforcement Learning, Probabilistic Methods, Theory.   Reinforcement learning is the main focus of the paper, as it investigates learning control architectures for embedded agents in partially observable Markovian decision processes (POMDPs).   Probabilistic methods are also relevant, as POMDPs are a type of probabilistic decision process. The paper explores the use of stochastic policies in the search space and defines the value or utility of a distribution over states.   Finally, the paper also falls under the category of Theory, as it presents new frameworks and results for learning in POMDPs without resorting to state estimation. It challenges the conventional discounted RL framework and proposes a new approach.
Probabilistic Methods, Rule Learning  Explanation:   Probabilistic Methods: The paper describes a dynamic belief network model for fall diagnosis, which is a probabilistic method that uses evidence from sensor observations to output beliefs about the current walking status and make predictions regarding future falls. The model represents possible sensor error and is parametrised to allow customisation to the individual being monitored.  Rule Learning: Customising the model to an individual involves learning rules from the sensor observations and using them to make predictions about future falls.
The paper does not belong to any of the sub-categories of AI listed. It is focused on goal-based explanation evaluation and does not involve any of the specific AI techniques mentioned.
Reinforcement Learning.   Explanation: The paper discusses two methods for organizing temporal behaviors in reinforcement environments, which is a key aspect of reinforcement learning. The paper also mentions the use of gradient descent during learning, which is a common technique in reinforcement learning.
Reinforcement Learning.   Explanation: The paper introduces the "incremental self-improvement paradigm" which is a form of reinforcement learning. The system is designed to improve the way it learns and improves, and it uses a reward-based approach to determine which modifications to keep. The paper also mentions that the system uses a Turing machine equivalent programming language, which is a common tool in reinforcement learning research. Therefore, reinforcement learning is the most related sub-category of AI in this paper.
Neural Networks.   Explanation: The paper explores a neural network model for efficient implementation of a database query system. The application of the proposed model to a high-speed library query system for retrieval of multiple items is based on partial match of the specified query criteria with the stored records. The performance of the ANN realization of the database query module is analyzed and compared with other techniques commonly used in current computer systems. The results of this analysis suggest that the proposed ANN design offers an attractive approach for the realization of query modules in large database and knowledge base systems, especially for retrieval based on partial matches. There is no mention of any other sub-categories of AI in the text.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper proposes a neural network architecture for syntax analysis, which is the main focus of the paper. The authors explain how the neural network is designed and trained to perform this task.   Probabilistic Methods: The authors also discuss the use of probabilistic methods in their approach, specifically the use of conditional random fields (CRFs) to model the dependencies between different parts of speech in a sentence. They explain how the CRF is integrated into the neural network architecture to improve its performance.
Theory. The paper belongs to the sub-category of Theory in AI. The paper discusses the theoretical limitations of identification strategies for classes with and without approximate fingerprints using Equivalence queries. It does not involve any practical implementation or application of AI techniques such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning.
Probabilistic Methods, Theory  Probabilistic Methods: The paper proposes a scoring measure for ranking branch instructions based on profile information, which is used to schedule instructions within a super block. This ranking scheme is designed to minimize the expected completion time of the region, which is a probabilistic objective.   Theory: The paper presents a theoretical analysis of a simplified abstract model of the problem of profile-driven scheduling over any acyclic code region, which yields the scoring measure for ranking branch instructions. The paper also discusses the computational complexity of the ranking scheme and its practicality for super blocks.
Genetic Algorithms.   Explanation: The paper proposes an extension to the Genetic Programming paradigm, which is a sub-category of Genetic Algorithms. The paper introduces mechanisms like transcription, editing, and repairing into Genetic Programming to evolve computer programs. The feasibility of the approach is demonstrated by using it to develop programs for the prediction of sequences of integer numbers. Therefore, the paper is primarily related to Genetic Algorithms.
Neural Networks.   Explanation: The paper discusses the use of the back-propagation algorithm, which is a widely used procedure for training multi-layer feed-forward networks of sigmoid units. The paper proposes a new approach to minimizing the error in these networks, which is demonstrated on a number of data-sets widely studied in the machine learning community. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
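The back-propagation procedure this entry refers to can be illustrated compactly. The sketch below trains a two-layer feed-forward network of sigmoid units on XOR; the task, network size, and learning rate are illustrative assumptions, not details from the paper:

```python
import numpy as np

# Minimal back-propagation sketch: a two-layer feed-forward net of
# sigmoid units learning XOR (illustrative task, not the paper's).
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(a):
    return np.hstack([a, np.ones((a.shape[0], 1))])

W1 = rng.normal(scale=0.5, size=(3, 8))   # input (+bias) -> 8 hidden units
W2 = rng.normal(scale=0.5, size=(9, 1))   # hidden (+bias) -> output

for _ in range(10000):
    h = sigmoid(add_bias(X) @ W1)
    out = sigmoid(add_bias(h) @ W2)
    # Cross-entropy loss through a sigmoid output simplifies to (out - y);
    # the hidden-layer gradient carries the sigmoid derivative h * (1 - h).
    d_out = out - y
    d_h = (d_out @ W2[:-1].T) * h * (1 - h)
    W2 -= 0.5 * add_bias(h).T @ d_out
    W1 -= 0.5 * add_bias(X).T @ d_h

preds = sigmoid(add_bias(sigmoid(add_bias(X) @ W1)) @ W2)
print(np.round(preds).ravel())
```

Errors are computed at the output and propagated backward through each layer's sigmoid derivative, exactly the scheme the entry describes.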
Probabilistic Methods.   Explanation: The paper discusses various sequential simulation-based methods for Bayesian filtering, which are probabilistic methods used for estimating the state of a system based on noisy measurements. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Theory.   Explanation: The paper presents a theoretical approach to the problem of comparing evolutionary trees, without using any specific AI techniques such as neural networks or genetic algorithms. The authors propose a mathematical framework for representing and comparing trees, and analyze the computational complexity of their algorithm. Therefore, the paper belongs to the sub-category of AI theory.
Case Based, Neural Networks  Explanation:  - Case Based: The paper is about Case-Based Reasoning (CBR), which is a subfield of AI that involves solving new problems by adapting solutions from similar past problems (i.e., cases). The paper specifically focuses on the subtask of case retrieval, which involves efficiently finding relevant cases from a large case base.  - Neural Networks: The proposed memory model, Case Retrieval Nets (CRNs), is based on a net-like structure and uses a spreading activation process to retrieve similar cases. This is a common approach in neural network models, which are a subfield of AI that involves building models inspired by the structure and function of the human brain.   Therefore, the paper belongs to both the Case Based and Neural Networks subcategories of AI.
Case Based, Rule Learning  Explanation:  - Case Based: The paper presents a memory model for Case-Based Reasoning, which involves storing and retrieving cases (observed symptoms and diagnoses) to solve new problems.  - Rule Learning: The paper mentions an "object model encoding knowledge" about the devices in the application domain, which can be seen as a set of rules for reasoning about the domain.
Probabilistic Methods.   Explanation: The paper discusses the use of Gibbs sampling, which is a probabilistic method commonly used in Bayesian inference. The authors also mention the use of prior distributions, which is a key aspect of Bayesian statistics. There is no mention of any other sub-category of AI in the text.
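As a hedged illustration of the Gibbs sampling idea (not the paper's model), the sketch below alternately draws each coordinate of a bivariate normal from its full conditional; the correlation value rho = 0.8 is an arbitrary choice:

```python
import numpy as np

# Gibbs sampling alternates draws from each variable's full conditional.
# For a standard bivariate normal with correlation rho, each conditional
# is N(rho * other, 1 - rho**2).
rng = np.random.default_rng(1)
rho = 0.8
x, z = 0.0, 0.0
samples = []
for t in range(20000):
    x = rng.normal(rho * z, np.sqrt(1 - rho ** 2))
    z = rng.normal(rho * x, np.sqrt(1 - rho ** 2))
    if t >= 1000:                       # discard burn-in
        samples.append((x, z))

samples = np.array(samples)
print(np.corrcoef(samples.T)[0, 1])     # estimate approaches rho
```

The empirical correlation of the retained draws converges to the target rho, which is the usual sanity check for such a sampler.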
Probabilistic Methods.   Explanation: The paper discusses the use of Gaussian processes as a way of defining prior distributions over functions, and how these can be used for nonparametric regression and classification problems. The approach is Bayesian, with hyperparameters being sampled using Markov chain methods, and the models are defined in a probabilistic way that allows for the discovery of high-level properties of the data.
Rule Learning.   Explanation: The paper is specifically focused on the problem of generating rule sets, which falls under the sub-category of AI known as Rule Learning. The paper discusses the complexity of generating the simplest rule set, which is a key aspect of rule learning algorithms. While the paper does touch on generalization algorithms and complexity measures, these are all within the context of rule learning.
Reinforcement Learning.   Explanation: The paper describes a new average-reward algorithm called SMART for finding gain-optimal policies in continuous time semi-Markov decision processes. The algorithm is a form of reinforcement learning, which is a sub-category of AI that involves learning through trial-and-error interactions with an environment to maximize a reward signal. The paper also integrates the reinforcement learning algorithm directly into two commercial discrete-event simulation packages, ARENA and CSIM, paving the way for this approach to be applied to many other factory optimization problems for which there already exist simulation models.
Case Based, Theory  Explanation:   This paper belongs to the sub-category of Case Based AI because it discusses a popular instance-based algorithm, Locally Weighted Polynomial Regression (LWPR), for learning continuous non-linear mappings. The paper proposes a new algorithm for making fast predictions with arbitrary local weighting functions, arbitrary kernel widths, and arbitrary queries. The paper also discusses an approximation that achieves up to a two-orders-of-magnitude speedup with negligible accuracy losses. These are all characteristics of Case Based AI, which involves solving new problems by adapting solutions to similar past problems.  This paper also belongs to the sub-category of Theory because it discusses the drawbacks of previous approaches to dealing with the computational expense of LWPR predictions and proposes a new algorithm based on a multiresolution search of a quickly-constructible augmented kd-tree. The paper also discusses potential extensions for tractable instance-based learning on datasets that are too large to fit in a computer's main memory. These are all characteristics of Theory in AI, which involves developing new algorithms and models for solving problems.
Theory. The paper presents theoretical results and conditions for inferring evolutionary trees from ordinal matrices. There is no mention or application of any specific AI sub-category such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning.
Rule Learning, Theory.   The paper discusses the use of X-of-N attributes for constructive induction, which is a subfield of rule learning. The paper also explores the characteristics and performance of different types of X-of-N attributes, which is a theoretical analysis.
Rule Learning, Theory.   The paper belongs to the sub-category of Rule Learning because it studies the effects of different types of new attributes on decision tree learning, which is a form of rule learning. It also belongs to the sub-category of Theory because it investigates the theoretical complexity and representational power of different types of new attributes.
Case Based, Theory  Explanation:  - Case Based: The paper discusses analogy and case-based reasoning systems and compares two methods for retrieving analogues from a large knowledge base. - Theory: The paper presents a theoretical model of retrieval time based on problem characteristics and conducts experiments to test the model's predictions.
Probabilistic Methods, Rule Learning  The paper belongs to the sub-category of Probabilistic Methods because it uses compression-based induction, which is a probabilistic method for classification. The paper also belongs to the sub-category of Rule Learning because it uses decision trees to classify DNA sequences based on their compressed representations. The decision trees are learned from the training data using a rule learning algorithm.
Neural Networks, Theory.   Neural Networks: The paper describes a computational model that simulates the behavior of the hippocampal system (HS), which is a neural network in the brain. The model uses long-term potentiation and long-term depression to encode memories, which are processes that involve changes in the strength of synaptic connections between neurons.  Theory: The paper presents a theoretical framework for understanding how the hippocampal system may rapidly transform transient patterns of activity into persistent structural encodings. The model is based on the idea that the HS uses Hebbian learning to strengthen connections between neurons that are active at the same time, and weaken connections between neurons that are not. The paper also discusses the broader implications of the model for understanding memory formation and retrieval in the brain.
Neural Networks.   Explanation: The paper compares two techniques for lighting control, one of which uses a neural network to approximate the mapping between sensor readings and device intensity levels. The other technique is a conventional feedback control loop. The paper concludes that the neural network approach appears superior. The focus of the paper is on the use of neural networks for control, making it most closely related to the sub-category of Neural Networks within AI.
Probabilistic Methods.   Explanation: The paper discusses a general class of alternating estimation-maximization (EM) type continuous-parameter estimation algorithms, which are commonly used in probabilistic modeling and inference. The paper provides a sufficient condition for convergence of these algorithms with respect to a given norm, and applies the results to the specific problem of estimating Poisson rate parameters in emission tomography. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Neural Networks, Rule Learning.   Neural Networks: The paper presents a method that uses neural networks to refine the knowledge of a PID controller. The Manncon algorithm is used to determine the topology and initial weights of the network, which is further trained using backpropagation.   Rule Learning: The Kbann approach, which is the basis for the method presented in this paper, uses neural networks to refine knowledge that can be written in the form of simple propositional rules. The Manncon algorithm extends this idea further by using the mathematical equations governing a PID controller to determine the topology and initial weights of the network.
Probabilistic Methods.   Explanation: The paper discusses the use of Markov chains to simulate random field models in image analysis and spatial statistics, which is a probabilistic method. The symmetrizations of the empirical estimator described in the paper are also based on probabilistic methods, specifically the idea behind generalized von Mises statistics.
Neural Networks, Theory.   Neural Networks: The paper describes a biologically motivated model of neuronal plasticity (Bienenstock et al., 1982) that forms the basis of the feature extraction method proposed by Intrator (1990). The method has been applied to feature extraction in the context of recognizing 3D objects from single 2D views (Intrator and Gold, 1991).  Theory: The paper discusses the relevance of the extracted features to the theory and psychophysics of object recognition. It also mentions recent statistical theory (Huber, 1985; Friedman, 1987) that is related to the feature extraction method proposed by Intrator (1990).
Genetic Algorithms.   Explanation: The paper discusses the use of Genetic Algorithms (GAs) in solving problems based on the Building-Block Hypothesis, which involves decomposing problems into sub-solutions and assembling them. The paper proposes a model of hierarchical interdependency between building-blocks that can be applied consistently through multiple levels, and presents empirical results of GAs on a canonical example of this model. The paper does not discuss any other sub-category of AI.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper mentions the back-propagation algorithm for neural net learning, which is an application that runs about 6 times faster on the MUSIC system than on a CRAY Y-MP and 2 times faster than on a NEC SX-3. This indicates that the MUSIC system is designed to support neural network processing.  Reinforcement Learning: The paper does not explicitly mention reinforcement learning, but it does mention that the MUSIC system is based on digital signal processors (DSP) and is designed to support parallel distributed memory architecture. This suggests that the system is designed to support a wide range of applications, including those that require reinforcement learning.
Neural Networks. This paper belongs to the sub-category of Neural Networks as it reports a Monte Carlo study of the dynamics of large untrained feedforward neural networks with randomly chosen weights and feedback. The analysis consists of looking at the percent of the systems that exhibit chaos, the distribution of largest Lyapunov exponents, and the distribution of correlation dimensions. The paper explores the behavior of artificial neural networks with random weights and feedback, which is a key area of research in the field of neural networks.
Neural Networks, Probabilistic Methods, Theory.   Neural Networks: The paper introduces a model for analog computation with discrete time that covers noisy analog neural nets and networks of spiking neurons.  Probabilistic Methods: The paper discusses the effect of analog noise on analog computations, which is a probabilistic phenomenon.  Theory: The paper presents a new type of upper bound for the power of analog computational models in the presence of analog noise, which is a theoretical result.
Probabilistic Methods.   Explanation: The paper adopts a Bayesian set-up and develops a hybrid Gibbs sampling estimation procedure, which are both probabilistic methods. The autologistic model with covariates is also a probabilistic model that estimates the probability of a binary response based on the values of covariates and the spatial correlation of the responses.
Rule Learning, Probabilistic Methods  Explanation:   The paper is primarily focused on the task of learning high utility rules, which falls under the category of rule learning. The authors propose a new algorithm that incorporates search control guidance to improve the efficiency and effectiveness of the rule learning process. This algorithm is based on probabilistic methods, as it uses a probabilistic model to estimate the utility of candidate rules and guide the search towards more promising areas of the search space.   While other sub-categories of AI may also be relevant to this paper (e.g. reinforcement learning could be used to optimize the search control guidance), they are not as directly related as rule learning and probabilistic methods.
Genetic Algorithms, Rule Learning.   Genetic Algorithms: The paper presents a method for evolving deterministic finite automata using genetic programming. The process involves creating a population of candidate programs, selecting the fittest individuals, and breeding them to create the next generation. This is a classic example of genetic algorithms.  Rule Learning: The evolved programs are represented using cellular encoding, which is a form of rule-based representation. The paper discusses how the evolved programs can be interpreted as sets of rules that govern the behavior of the automata. This is an example of rule learning, where the system learns a set of rules that govern its behavior.
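The generational loop described here (create a population of candidates, select the fittest, breed the next generation) can be sketched generically. The OneMax fitness function below is an illustrative stand-in, not the automaton-induction task of the paper:

```python
import random

# Generic generational GA loop: tournament selection, one-point
# crossover, bit-flip mutation. Fitness (OneMax: count of 1-bits)
# is a stand-in task for illustration.
random.seed(0)
GENES, POP, GENS = 30, 60, 80

def fitness(ind):
    return sum(ind)

def tournament(pop):
    return max(random.sample(pop, 3), key=fitness)

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    while len(nxt) < POP:
        a, b = tournament(pop), tournament(pop)
        cut = random.randrange(1, GENES)                 # one-point crossover
        child = a[:cut] + b[cut:]
        child = [g ^ (random.random() < 0.01) for g in child]  # mutation
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
print(fitness(best))
```

Genetic programming follows the same loop but evolves program trees instead of fixed-length bitstrings.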
Neural Networks.   Explanation: The paper is specifically about a system tool designed to aid in the specification, construction, and simulation of connectionist networks, which are a type of neural network. The paper does not discuss any other sub-category of AI.
Theory.   Explanation: The paper is focused on exploring the relationship between distributed group behaviour and the behavioural complexity of individuals. It does not involve the application of any specific AI sub-category such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning. Instead, it presents a theoretical framework for understanding the relationship between group behaviour and individual behaviour. Therefore, the paper belongs to the Theory sub-category of AI.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper proposes a Monte Carlo strategy for checking the consistency of a program with a set of integrity constraints. This involves random generation of queries to the program, which is a probabilistic approach.   Rule Learning: The paper discusses the use of integrity constraints as a replacement for negative examples in ILP. Integrity constraints are first-order clauses that can play the role of negative examples in an inductive process. The proposed algorithm allows the use of integrity constraints instead of (or together with) negative examples, which can lead to more accurate definitions. This is an example of rule learning.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses the use of probabilistic models in AI, specifically in the context of decision-making and uncertainty. It mentions the use of Bayesian networks and Markov decision processes, which are both examples of probabilistic methods.  Reinforcement Learning: The paper also discusses reinforcement learning, which is a type of machine learning that involves an agent learning to make decisions based on feedback from its environment. The paper mentions the use of reinforcement learning in robotics and game playing, and discusses some of the challenges associated with this approach.
Genetic Algorithms. This paper belongs to the sub-category of Genetic Algorithms. The paper explicitly mentions the use of a "standardly specified genetic algorithm" to evolve trade strategies. The genetic algorithm is used to evolve trade strategies that are then implemented in the trade network game.
Neural Networks.   Explanation: The paper presents a method for identifying the ancestor of a hadron jet using a neural network approach. The method involves using a network of sigmoidal functions and a gradient descent procedure to map observed hadronic kinematical variables to the quark/gluon identity. The errors are back-propagated through the network to improve the accuracy of the mapping. The paper also discusses how the neural network method can be used to identify heavy quarks and disentangle different hadronization schemes. There is no mention of any other sub-categories of AI in the text.
Neural Networks.   Explanation: The paper explicitly mentions the use of a neural network classifier to separate gluon from quark jets. The other sub-categories of AI (Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, Theory) are not mentioned or implied in the text.
Neural Networks.   Explanation: The paper focuses on self-organizing neural networks and their applications in hadronic jet physics. The other sub-categories of AI mentioned in the question (Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, Theory) are not directly relevant to the content of the paper.
Neural Networks, Probabilistic Methods, Theory.   Neural Networks: The paper discusses the use of Langevin updating in multilayer perceptrons, which are a type of neural network.   Probabilistic Methods: The Langevin updating rule involves adding noise to the weights during learning, which can be seen as a probabilistic method.   Theory: The paper presents theoretical results showing that Langevin updating can improve learning on problems with initially ill-conditioned Hessians, which is important for multilayer perceptrons with many hidden layers.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the approximation of the distribution of n independent multivalued random variables, which is a probabilistic concept.   Theory: The paper presents improved upper bounds for a class of problems related to PAC learning and pseudorandomness, which is a theoretical concept.
Neural Networks, Case Based   Explanation:   Neural Networks: The paper discusses the use of a modified version of the multiple task learning (MTL) neural network method for functional transfer of knowledge.   Case Based: The paper applies the MTL network to a diagnostic domain of four levels of coronary artery disease, demonstrating the ability to develop a predictive model for one level of disease with superior diagnostic ability over models produced by either single task learning or standard multiple task learning. This involves the use of case-based reasoning to diagnose the disease.
Genetic Algorithms.   Explanation: The paper proposes the use of genetic algorithms for path planning and trajectory planning of an autonomous mobile robot. The entire paper is focused on the development and application of genetic algorithms for motion planning, and there is no mention of any other sub-category of AI. Therefore, genetic algorithms are the most related sub-category of AI to this paper.
Neural Networks, Theory.   Neural Networks: The paper discusses the Vapnik-Chervonenkis dimension of recurrent neural networks, which are a type of neural network.  Theory: The paper provides lower and upper bounds for the VC dimension of recurrent neural networks, which is a theoretical concept in machine learning. The paper also discusses the differences between recurrent and feedforward networks, which is a theoretical analysis.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms are not explicitly mentioned in the text, but the techniques presented in the paper (Operator Importance Analysis and Operator Interaction Analysis) are related to optimization and search, which are key components of genetic algorithms.   Probabilistic Methods are also not explicitly mentioned, but the techniques presented in the paper involve measuring and analyzing the likelihood of certain operators being useful or interacting with each other, which can be seen as probabilistic in nature. For example, Operator Importance Analysis determines the probability of certain operators being important for a given class of design problems.
Neural Networks, Rule Learning.   Neural Networks: The paper discusses the use of Madaline-style networks, which are isomorphic to decision trees, for learning non-linearly separable boolean functions. The performance of this network is compared with standard BackPropagation on a sample problem.   Rule Learning: The paper investigates an algorithm for the construction of decision trees comprised of linear threshold units. Littlestone's Winnow algorithm is also explored within this architecture as a means of learning in the presence of many irrelevant attributes. The learning ability of this Madaline-style architecture on nonoptimal (larger than necessary) networks is also explored.
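Littlestone's Winnow algorithm mentioned in this entry is small enough to sketch directly. The target concept (a two-literal disjunction among many irrelevant attributes) and the promotion/demotion factors below are standard textbook choices, not details taken from the paper:

```python
import random

# Winnow sketch: multiplicative weight updates on Boolean attributes,
# predicting 1 when the weighted sum reaches the threshold n. The
# target concept x0 OR x1 leaves the other 18 attributes irrelevant.
random.seed(0)
n = 20
w = [1.0] * n
theta = float(n)
mistakes = 0
for _ in range(500):
    x = [random.randint(0, 1) for _ in range(n)]
    label = 1 if (x[0] or x[1]) else 0
    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
    if pred != label:
        mistakes += 1
        factor = 2.0 if label == 1 else 0.5   # promote or demote active weights
        w = [wi * factor if xi else wi for wi, xi in zip(w, x)]

print(mistakes)
```

The attraction noted in the entry is Winnow's mistake bound, which grows only logarithmically with the number of irrelevant attributes.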
Probabilistic Methods. This paper belongs to the sub-category of probabilistic methods because it discusses the use of graphical models for causal inference and path analysis. The paper specifically mentions the use of recursive structural equations models, which are a type of probabilistic graphical model. The author also discusses the use of probability theory in modeling causal relationships between variables.
Neural Networks, Rule Learning.   Neural Networks: The paper is primarily focused on the generation of neural networks through the induction of binary trees of threshold logic units (TLUs). The authors describe the framework for their tree construction algorithm and how such trees can be transformed into an isomorphic neural network topology. They also examine several methods for learning the linear discriminant functions at each node of the tree structure and show that it is possible to simultaneously learn both the topology and weight settings of a neural network simply using the training data set that they are given.  Rule Learning: The paper discusses the construction of decision trees using threshold logic units (TLUs) and compares the accuracy of this method to classical information theoretic methods for constructing decision trees (which use single feature tests at each node). The authors show that their TLU trees are smaller and thus easier to understand than classical decision trees. They also discuss methods for learning the linear discriminant functions at each node of the tree structure, which can be seen as learning rules for making decisions based on the input features.
Neural Networks.   Explanation: The paper discusses experiments with the Cascade-Correlation Algorithm, which is a type of neural network. The authors explore the effectiveness of this algorithm in solving various problems, such as function approximation and pattern recognition. The paper also discusses the architecture and training process of the neural network, as well as the results of the experiments. Therefore, this paper belongs to the sub-category of AI known as Neural Networks.
Theory.   Explanation: The paper focuses on the theoretical problem of learning DNF formulae in the mistake-bound and PAC models, and develops a new approach called polynomial explainability. It also applies the DNF results to the problem of learning visual concepts and discusses the robustness of the results under various types of noise. While the paper mentions some machine learning techniques such as k-DNF and k-term-DNF, it does not use them as the main approach for learning. Therefore, the paper is primarily focused on theoretical analysis and belongs to the sub-category of AI theory.
Probabilistic Methods.   Explanation: The paper presents a general Bayesian framework for plan recognition that accounts for context and mental state of the agent. The approach is based on reasoning evidentially from observations of the agent's actions to assess the plausibility of the various candidates. The authors demonstrate the approach on a problem in traffic monitoring, where the objective is to induce the plan of the driver from observation of vehicle movements.
Probabilistic Methods.   Explanation: The paper presents a parallel algorithm for exact probabilistic inference in Bayesian networks. The entire paper is focused on probabilistic methods for inference in Bayesian networks.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper investigates a frequentist connection between empirical data and convex sets of probability distributions, which is a probabilistic approach to inference and decision-making. The paper also presents new asymptotic convergence results paralleling the laws of large numbers in probability theory.  Theory: The paper presents a framework for describing a sequence of random outcomes as being drawn from a convex set of distributions, rather than just from a single distribution. The paper also compares this approach with approaches based on prior subjective constraints, which is a theoretical comparison.
Probabilistic Methods.   Explanation: The paper discusses traditional techniques of reason maintenance, which are based on probabilistic methods for belief revision. The authors propose revision methods that aim to revise only those beliefs and plans worth revising, and to tolerate incoherence and ungroundedness when these are judged less detrimental than a costly revision effort. The authors also use an artificial market economy in planning and revision tasks to arrive at overall judgments of worth, which is a probabilistic approach to decision-making.
Neural Networks.   Explanation: The paper specifically discusses the development and application of a feed-forward neural network method for reconstructing the invariant mass of hadronic jets. The use of a neural network is the main focus of the paper and is the primary AI technique utilized.
Neural Networks, Theory.   Neural Networks: The paper discusses a new class of computing models called ASOCS, which are high-speed, self-adaptive, and massively parallel. These models are based on the concept of neural networks, which are a type of AI that is modeled after the structure and function of the human brain.  Theory: The paper discusses the development of a new ASOCS model called DNA, which does not depend on a hierarchical node structure for success. The paper also discusses three areas of the DNA model, including its flexible nodes, how it overcomes problems with allocating unused nodes, and how it operates during processing and learning. These discussions are all related to the theoretical development of the DNA model.
Case Based, Probabilistic Methods  Explanation:  - Case Based: The paper presents an approach to mobile robot path planning using case-based reasoning. The case base stores the paths and information about their traversability, and during route planning the paths that past experience indicates are least risky are preferred. This is a clear example of case-based reasoning. - Probabilistic Methods: Although not explicitly mentioned, the approach presented in the paper involves using information about the traversability of paths to make probabilistic decisions about which paths to take. The paths that are least risky according to former experience are preferred, which implies that some sort of probabilistic reasoning is being used to evaluate the risk associated with each path.
Genetic Algorithms.   Explanation: The paper explicitly mentions that the study adopts an evolutionary approach in which strategies and tactics correspond to the genetic material in a genetic algorithm. The experiments conducted aim to determine the most successful strategies and to see how and when these strategies evolve depending on the context and negotiation stance of the agent's opponent. Therefore, the paper primarily belongs to the sub-category of AI known as Genetic Algorithms.
Probabilistic Methods.   Explanation: The paper focuses on Bayesian estimation and model choice in item response models, which are probabilistic models used in psychometrics to analyze responses to test items. The authors use Bayesian methods to estimate model parameters and compare different models based on their posterior probabilities. The paper also discusses the use of prior distributions and Markov chain Monte Carlo (MCMC) methods in Bayesian inference. There is no mention of any other sub-category of AI in the text.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the need for agents to make decisions based on coherent expectations and preferences, which is a key aspect of decision theory and probabilistic reasoning. The paper also mentions the use of an artificial market economy for arriving at overall judgments of worth, which can be seen as a probabilistic approach to decision-making.  Theory: The paper discusses the theoretical foundations of rational planning and replanning, including the need to revise plans incrementally and locally based on expected utility, and the importance of identifying and revising only those parts of a plan that are worth revising. The paper also discusses the need for a representation of qualitative preferences, which is a theoretical concept in decision theory.
Probabilistic Methods.   Explanation: The paper discusses the naive Bayesian classifier, which is a probabilistic method for classification. The authors propose an extension to this method to address its limitations, and they evaluate their approach on several natural domains. The paper also briefly mentions other approaches to extending naive Bayesian classifiers, which may include other probabilistic methods. The other sub-categories of AI listed (Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, Theory) are not directly relevant to the content of this paper.
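The unextended naive Bayesian classifier that this paper takes as its starting point can be sketched as follows; the toy weather-style data and the Laplace smoothing constant are illustrative assumptions, not material from the paper:

```python
from collections import defaultdict
from math import log

# Naive Bayes sketch: class-conditional attribute counts with Laplace
# smoothing, assuming attribute independence given the class.
train = [
    (("sunny", "hot"), "no"), (("sunny", "mild"), "no"),
    (("rain", "mild"), "yes"), (("rain", "cool"), "yes"),
    (("overcast", "hot"), "yes"), (("overcast", "cool"), "yes"),
]

class_count = defaultdict(int)
attr_count = defaultdict(int)            # (class, position, value) -> count
for attrs, cls in train:
    class_count[cls] += 1
    for i, v in enumerate(attrs):
        attr_count[(cls, i, v)] += 1

def predict(attrs):
    def score(cls):
        s = log(class_count[cls] / len(train))
        for i, v in enumerate(attrs):
            s += log((attr_count[(cls, i, v)] + 1) /
                     (class_count[cls] + 3))   # Laplace; 3 values per attribute
        return s
    return max(class_count, key=score)

print(predict(("rain", "hot")))   # → yes
```

The independence assumption is exactly the limitation that extensions of the kind the paper proposes try to relax.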
Probabilistic Methods.   Explanation: The paper presents a novel induction algorithm for Bayesian networks, which is a probabilistic method used for modeling uncertain relationships between variables. The paper compares the performance of this algorithm with selective and non-selective naive Bayesian classifiers, which are also probabilistic methods. The focus of the paper is on improving the accuracy and computational efficiency of Bayesian network classifiers, which are based on probabilistic reasoning. Therefore, the paper belongs to the sub-category of AI known as Probabilistic Methods.
Theory. This paper presents a theoretical method for function estimation using wavelet shrinkage. It does not involve any of the other sub-categories of AI listed.
Genetic Algorithms, Theory.   Genetic programming, a subfield of genetic algorithms, is mentioned in the title of the paper. The paper discusses the use of genetic programming to evolve data structures, which is a specific application of genetic algorithms.   The paper also delves into the theoretical aspects of genetic programming, such as the use of fitness functions and the role of crossover and mutation operators. Therefore, the paper also falls under the category of Theory.
Theory.   Explanation: The paper presents a theoretical framework for understanding the correlations in stochastic neural networks. While the paper discusses the dynamics of the network and the interactions between neurons, it does not involve any specific application of AI techniques such as case-based reasoning, genetic algorithms, reinforcement learning, or rule learning. The paper is focused on developing a theoretical understanding of the behavior of neural networks, rather than on applying specific AI techniques to solve a particular problem.
Probabilistic Methods.   Explanation: The paper discusses the use of the Dirichlet Process Prior in Bayesian nonparametric inference, which is a probabilistic method. The authors also mention other probabilistic models and methods, such as the Chinese Restaurant Process and the Polya Urn Scheme.
Neural Networks, Theory.   Neural Networks: The paper discusses feedforward neural networks and their properties, including the problem of interference and the concept of spatially local networks. It also analyzes sigmoidal multi-layer perceptron (MLP) networks that employ the back-prop learning algorithm.   Theory: The paper develops a theoretical framework consisting of a measure of interference and a measure of network localization, which incorporates not only the network weights and architecture but also the learning algorithm. It also addresses a familiar misconception about sigmoidal networks and demonstrates that they can be made arbitrarily local while retaining the ability to represent any continuous function on a compact domain.
Neural Networks.   Explanation: The paper presents a modular neural network composed of two expert networks and one mediating gate network with the task of learning to recognize faces and classify nonface objects. The paper discusses how the network tends to divide labor between the two expert modules, with one expert specializing in face processing and the other specializing in nonface object processing. The paper also discusses how the network's performance on face recognition decreases dramatically as one of the experts is progressively damaged, which is similar to data reported for prosopagnosic patients. Therefore, the paper primarily belongs to the sub-category of Neural Networks.
Neural Networks, Theory.   Neural Networks: The paper describes a neural network model called LISSOM that is used to model cortical plasticity. The model is based on the self-organization of afferent and lateral connections in cortical maps.   Theory: The paper presents a theoretical framework for understanding cortical plasticity and suggests techniques to hasten recovery following sensory cortical surgery. The LISSOM model predicts that adapting lateral interactions are fundamental to cortical reorganization.
This paper belongs to the sub-categories of Genetic Algorithms and Reinforcement Learning.   Genetic Algorithms: The paper discusses the use of genetic programming to evolve filter functions for the game of Go. The authors use a genetic algorithm to evolve a set of filters that can be used to evaluate the strength of a given move. The fitness function used in the genetic algorithm is based on the performance of the filters in a game of Go.  Reinforcement Learning: The paper also discusses the use of reinforcement learning to train a neural network to play Go. The authors use a variant of Q-learning to train the network, with the goal of maximizing the expected reward over a game of Go. The network is trained using a dataset of expert moves, and the authors show that the trained network is able to play at a high level.
Probabilistic Methods.   Explanation: The paper discusses the use of probability distributions to represent a machine learning algorithm's bias over the hypothesis and instance space. It introduces stochastic logic programs as a means of providing a structured definition of such a probability distribution. The paper also discusses how these probabilities can be used to guide the search in an Inductive Logic Programming (ILP) system and how they can be used to measure the generality of hypotheses in the ILP system Progol4.2. All of these concepts are related to probabilistic methods in AI.
Rule Learning, Theory.   Explanation:  - Rule Learning: The paper presents a novel approach to learning first order logic formulae, which corresponds to rule learning.  - Theory: The paper discusses the use of interpretations as examples, which are true or false for the target theory. The paper also presents a clausal representation, which corresponds to a conjunctive normal form where each conjunct forms a constraint on positive examples. These aspects relate to the theoretical foundations of the proposed approach.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper uses neural networks as the basis functions for the system dynamics.   Probabilistic Methods: The paper reports the result of a Monte Carlo study on the probability of chaos in large dynamical systems. The study involves randomly choosing parameter values for the networks, which is a probabilistic method. The conclusion drawn from the study is also probabilistic in nature, stating that most large systems are chaotic.
Genetic Algorithms.   Explanation: The paper presents a single uniform approach using genetic programming for the automatic synthesis of analog circuits. The approach involves the use of genetic algorithms to evolve the topology and sizing of the circuits. The paper does not mention any other sub-category of AI.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses a non-linear feedforward network algorithm for blind signal processing from an information maximization viewpoint.   Probabilistic Methods: The paper discusses the maximum likelihood algorithm for the optimization of a linear generative model. The paper also gives a partial proof of the 'folk-theorem' that any mixture of sources with high-kurtosis histograms is separable by the classic ICA algorithm, an argument that rests on probabilistic reasoning.
Probabilistic Methods.   Explanation: The paper presents an expectation-maximization (EM) algorithm for principal component analysis (PCA) and a new variant of PCA called sensible principal component analysis (SPCA) which defines a proper density model in the data space. Both PCA and SPCA are probabilistic methods that involve modeling the covariance of datasets and finding the leading eigenvectors. The EM algorithm is a probabilistic method for estimating the parameters of a statistical model when there are missing data. Therefore, this paper belongs to the sub-category of AI known as Probabilistic Methods.
Probabilistic Methods.   Explanation: The paper presents new algorithms for parameter estimation of Hidden Markov Models (HMMs) using a framework based on maximizing the likelihood of observations. The proposed algorithms are similar to the EM (Baum-Welch) algorithm, which is a probabilistic method commonly used for training HMMs. The paper also uses a distance measure based on relative entropy between two HMMs, which is a probabilistic concept. Therefore, the paper belongs to the sub-category of Probabilistic Methods in AI.
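The observation likelihood that Baum-Welch-style estimators maximize is computed by the standard forward recursion. The sketch below is that generic recursion, not the paper's new algorithms; the one-state and two-state HMMs in the test are invented toy examples.

```python
def forward_likelihood(obs, pi, trans, emit):
    """P(obs) under a discrete HMM via the forward recursion.
    pi[s]: initial prob of state s; trans[s][t]: s -> t transition prob;
    emit[s][o]: prob that state s emits symbol o."""
    n = len(pi)
    # Initialization: alpha[s] = P(first symbol, state s)
    alpha = [pi[s] * emit[s][obs[0]] for s in range(n)]
    # Induction: push alpha forward one symbol at a time
    for o in obs[1:]:
        alpha = [emit[s][o] * sum(alpha[t] * trans[t][s] for t in range(n))
                 for s in range(n)]
    # Termination: sum over final states
    return sum(alpha)
```

A one-state fair-coin HMM assigns probability 0.25 to any two-flip sequence, which is a handy sanity check. In practice the recursion is run in log space or with per-step scaling to avoid underflow on long sequences.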
Genetic Algorithms, Theory.   Genetic Algorithms: The paper investigates the distribution of performance of Boolean functions using enumeration and Monte-Carlo random sampling, which are common techniques in genetic algorithms. The paper also discusses the fitness distributions of full trees and asymmetric trees, which are relevant to genetic programming.  Theory: The paper discusses the distribution of performance of Boolean functions and considers the No Free Lunch (NFL) theorems, which are theoretical concepts in machine learning. The paper also analyzes the fitness distributions of different types of trees, which is a theoretical aspect of genetic programming.
Probabilistic Methods.   Explanation: The paper analyzes the convergence to stationarity of a non-reversible Markov chain, which is a probabilistic method commonly used in sampling. The analysis uses probabilistic techniques and an explicit diagonalization.
Neural Networks, Theory.   Neural Networks: The paper proposes a neural architecture for storage and recall of information based on both content and address. The architecture is based on a combination of autoassociative and heteroassociative neural networks. The authors also discuss the use of backpropagation for training the network.   Theory: The paper presents a theoretical framework for the proposed neural architecture, including mathematical equations and diagrams. The authors also discuss the potential applications of the architecture, such as in cognitive psychology and artificial intelligence.
Probabilistic Methods.   Explanation: The paper presents a probabilistic approach to principal component analysis (PCA) by formulating it within a maximum-likelihood framework based on a specific form of Gaussian latent variable model. The resulting model is a mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. The paper discusses the advantages of this model in the context of clustering, density modelling, and local dimensionality reduction, and demonstrates its application to image compression and handwritten digit recognition. There is no mention of other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Neural Networks, Theory.   Neural Networks: The paper discusses the early stopping technique in linear networks, which is a common method used in training neural networks to prevent overfitting. The authors analyze the geometry of the early stopping process and provide insights into the behavior of the network during training.  Theory: The paper presents a theoretical analysis of the early stopping technique in linear networks. The authors derive mathematical expressions for the optimal stopping point and the generalization error of the network. They also provide geometric interpretations of the early stopping process and its relationship to the network's weight space.
Genetic Algorithms.   Explanation: The paper describes the use of genetic programming to evolve programs for optimized maneuvers in a two-dimensional space. The programs are evolved using fixed and randomly-generated fitness cases, which is a characteristic of genetic algorithms. The paper also discusses the implementation of the genetic programming system and the results of testing the evolved programs, further supporting the use of genetic algorithms in this research.
Genetic Algorithms.   Explanation: The paper describes a design process that uses genetic programming, which is a sub-category of AI that involves using evolutionary algorithms to solve problems. The process involves creating a population of program trees, evaluating their fitness based on user-defined criteria, and using genetic operations such as reproduction, crossover, and mutation to create offspring that are better suited to the design requirements. This process is characteristic of genetic algorithms, which use principles of natural selection and genetics to optimize solutions to complex problems.
Neural Networks, Theory.   Neural Networks: The paper discusses computational models of neural map formation, which are based on neural activity dynamics and weight dynamics. The authors present an example of how an optimization problem can be derived from detailed non-linear neural dynamics, and how different weight dynamics can be derived from two types of objective function terms and two types of constraints.   Theory: The paper presents a framework for constrained optimization for neural map formation, which includes an objective function and constraints from which weight growth and normalization rules can be derived. The authors investigate how different weight dynamics can be derived from the same optimization problem, and how coordinate transformations play a role in this process. The paper also discusses how the constrained optimization framework can help in understanding, generating, and comparing different models of neural map formation.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper proposes a simple unit that adds Gaussian noise to its input before passing it through a sigmoidal squashing function. The resulting behavior can be deterministic, binary stochastic, or continuous stochastic. The paper also describes how "slice sampling" can be used for inference and learning in top-down networks of these units.   Neural Networks: The same noisy sigmoidal units can be used to model latent structure that explains correlations among observed variables. The paper also demonstrates learning on two simple problems using these units.
Probabilistic Methods.   Explanation: The paper discusses the use of Bayesian networks, which are a type of probabilistic graphical model, and proposes a new approach for sequential update of both the parameters and structure of these networks. The paper also describes modifications to the scoring functions used for learning Bayesian networks.
Neural Networks.   Explanation: The paper describes a hypothetical cortical architecture for visual object recognition based on a computational model that relies on modules for learning from examples, consistent with the usual description of cortical neurons as tuned to multidimensional optimal stimuli. The model is a Memory-Based Model (MBM) that contains sparse population coding, memory-based recognition, and codebooks of prototypes. The units of MBMs are consistent with the usual description of cortical neurons as tuned to multidimensional optimal stimuli. The paper describes how an example of MBM may be realized in terms of cortical circuitry and biophysical mechanisms, consistent with psychophysical and physiological data. The paper also makes a number of predictions, testable with physiological techniques.
This paper belongs to the sub-category of AI called Genetic Algorithms.   Explanation:  The paper discusses the concept of "collective adaptation" which involves the sharing of building blocks among individuals in a population. This is similar to the process of crossover in genetic algorithms, where genetic material is exchanged between individuals to create new offspring with desirable traits. The paper also mentions the use of fitness functions to evaluate the performance of individuals in the population, which is a key component of genetic algorithms. Therefore, this paper is most related to the sub-category of AI called Genetic Algorithms.
Probabilistic Methods.   Explanation: The paper discusses the use of qualitative probabilistic relationships among variables for computing bounds of conditional probability distributions in Bayesian networks. The focus is on using probabilistic methods to obtain monotonically tightening bounds that converge to exact distributions.
Probabilistic Methods, Theory.   This paper belongs to the sub-category of Probabilistic Methods because it discusses item response models, which are probabilistic models used in psychometrics to analyze responses to test items. The paper also uses Bayesian methods to estimate the parameters of the models.   It also belongs to the sub-category of Theory because it explores the concept of monotonicity in item response models, which is a theoretical property of the models. The paper discusses both latent and manifest monotonicity, which are theoretical concepts related to the ordering of item difficulties and the ordering of item responses, respectively.
Probabilistic Methods, Reinforcement Learning, Theory.   Probabilistic Methods: The paper proposes an algorithm for minimizing an error function associated with a set of highly structured linear inequalities. The error function has a Lipschitz continuous gradient that allows the use of fast serial and parallel unconstrained minimization algorithms.   Reinforcement Learning: The paper does not explicitly mention reinforcement learning, but the proposed algorithm can be seen as a form of supervised learning, which, like reinforcement learning, is a branch of machine learning.   Theory: The paper presents a theoretical framework for multicategory discrimination and proposes an algorithm based on minimizing an error function associated with a set of highly structured linear inequalities. The paper also discusses the computational complexity of the problem and presents preliminary computational results.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper presents a large and systematic body of data on the relative effectiveness of mutation, crossover, and combinations of mutation and crossover in genetic programming (GP). The literature of traditional genetic algorithms contains related studies, but mutation and crossover in GP differ from their traditional counterparts in significant ways.   Theory: The resulting data may be useful not only for practitioners seeking to optimize parameters for GP runs, but also for theorists exploring issues such as the role of building blocks in GP.
Probabilistic Methods.   Explanation: The paper discusses Markov chain Monte Carlo methods, which are a type of probabilistic method used in Bayesian inference and statistical physics. The paper specifically focuses on improving the efficiency of these methods by suppressing random walks through the use of ordered overrelaxation.
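Neal's ordered overrelaxation replaces a single Gibbs draw from a full conditional with K draws: the K values are sorted together with the current value, and the chain moves to the value at the mirrored rank. The following is a minimal sketch of that single-coordinate update, under the assumption that exact sampling from the full conditional is available.

```python
def ordered_overrelax_step(x, sample_conditional, K):
    """One ordered-overrelaxation update for a single coordinate.
    Draws K values from the full conditional, sorts them together with
    the current value x, and returns the value whose rank mirrors x's
    rank. This suppresses random-walk behaviour while leaving the
    stationary distribution invariant."""
    values = sorted([sample_conditional() for _ in range(K)] + [x])
    r = values.index(x)          # rank of the current value among K + 1
    return values[K - r]         # value at the mirrored rank
```

For a Gaussian full conditional one would pass something like `lambda: random.gauss(mu, sigma)` as the sampler; as K grows, the update approaches a deterministic reflection of x through the conditional median, which is the source of the random-walk suppression the paper exploits. (Ties among sampled values would need a randomized rank in a careful implementation.)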
Probabilistic Methods.   Explanation: The paper discusses various optimum decision rules for pattern recognition, including Bayes rule and Chow's rule, which are both probabilistic methods. The newly proposed class-selective rejection rule also involves probabilistic reasoning, as it aims to find an optimum tradeoff between the error rate and the average number of selected classes. The paper presents a functional relation between the recognition error and the class-selective reject function, which can be estimated from unlabelled patterns using probabilistic methods. Therefore, the paper is primarily focused on probabilistic methods for pattern recognition.
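Chow's rule rejects a pattern outright when even the largest posterior is too small, whereas the class-selective variant instead returns every class whose posterior is large enough. A minimal sketch of both decision rules (thresholds and posteriors are invented for illustration, and the exact optimum thresholds are derived in the paper, not here):

```python
def chow_decision(posteriors, reject_threshold):
    """Classic Chow rule: return the most probable class, or None
    (reject) when even the largest posterior is below the threshold."""
    best = max(posteriors, key=posteriors.get)
    return best if posteriors[best] >= reject_threshold else None

def class_selective_decision(posteriors, keep_threshold):
    """Class-selective rejection: keep every class whose posterior
    exceeds the threshold, trading error rate against the average
    number of selected classes."""
    selected = [c for c, p in posteriors.items() if p >= keep_threshold]
    # Never return an empty selection: fall back to the single best class.
    return selected or [max(posteriors, key=posteriors.get)]
```

With posteriors {a: 0.6, b: 0.3, c: 0.1}, Chow's rule at threshold 0.7 rejects, while class-selective rejection at threshold 0.25 narrows the decision to {a, b} instead of discarding the pattern.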
Probabilistic Methods, Theory.   Probabilistic Methods: The paper utilizes a collective memory that is built up and sampled probabilistically to integrate weak and strong search heuristics. Each weak heuristic maintains a local cache of the collective memory, which is used to guide the search. The impact of the distribution of the collective memory on the distributed search is also examined.  Theory: The paper presents a theoretical framework for integrating weak and strong search heuristics using collective memory to solve a hard combinatorial optimization problem. The authors construct a family of graphs, FC, to demonstrate the effectiveness of their approach. They also analyze the impact of various characteristics of the distribution of the collective memory and the search algorithms on the distributed search.
Rule Learning, Theory.   Rule Learning is the most related sub-category as the paper focuses on the integration of knowledge acquisition and machine learning techniques, specifically on the extension of FOCL, a multistrategy Horn-clause learning program, to enhance its power as a knowledge acquisition tool. The paper also emphasizes the importance of maintaining a connection between a rule and the set of examples explained by the rule.   Theory is also relevant as the objective of the research is to make the modification of a domain theory analogous to the use of a spread sheet. The paper describes the development of a prototype knowledge acquisition tool, FOCL-1-2-3, to evaluate the approach.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the extension of a likelihood or preference order on worlds to a likelihood ordering on sets of worlds, which is a probabilistic approach to reasoning.   Theory: The paper provides an axiomatization of the logic of relative likelihood in the case of partial orders, which is a theoretical framework for reasoning about relative likelihood. The paper also discusses the connection between relative likelihood and default reasoning, which is a theoretical topic in AI.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the behavior of hill-climbing search for solving Boolean satisfiability problems, which involves making probabilistic choices at each step of the search. The paper also proposes a run-time heuristic to determine when to give up searching a plateau and restart, which is based on empirical observations of the search space and the behavior of the search algorithm.  Theory: The paper investigates the properties of the search space and the behavior of hill-climbing search for solving hard, random Boolean satisfiability problems. The paper also determines the optimum point to terminate search and restart empirically over a range of problem sizes and complexities, and proposes a simple run-time heuristic based on these empirical results. The paper does not involve the implementation or application of AI techniques, but rather focuses on the theoretical analysis of a specific search algorithm for a specific problem domain.
Neural Networks.   Explanation: The paper is specifically focused on neural networks and their input representations. The authors introduce fast quality measures for neural network representations and compare them to a previously published measure. The entire paper is centered around improving the accuracy of neural networks through better input representations.
Theory.   Explanation: The paper discusses theoretical results on controllability properties of discrete-time nonlinear systems, without using any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning. Therefore, the paper belongs to the sub-category of AI theory.
Theory.   Explanation: The paper proposes an algorithm for solving systems of monotone equations using a combination of Newton, proximal point, and projection methodologies. The focus is on the theoretical properties of the algorithm, such as global convergence without additional regularity assumptions and achieving local superlinear rate of convergence under standard assumptions. There is no mention of any specific application or use of AI sub-categories such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper discusses the use of genetic programming, which is a type of genetic algorithm, and how it can be improved through the addition of cultural transmission of information.  Reinforcement Learning: The paper also discusses the use of genetic programming on Wumpus world agent problems, which are a type of reinforcement learning problem. The addition of cultural transmission of information is shown to improve the performance of the genetic programming system on these problems.
Case Based, Rule Learning.   Case Based: The paper reviews a large number of CBR (Case-Based Reasoning) systems to determine when and what sort of adaptation is currently used. The paper proposes an adaptation-relevant taxonomy of CBR systems, a taxonomy of the tasks performed by CBR systems, and a taxonomy of adaptation knowledge.   Rule Learning: The paper proposes a taxonomy of adaptation knowledge, which can be seen as a form of rule learning. The paper suggests that the partition of CBR systems and the division of adaptation knowledge may be useful for the CBR system designer.
Neural Networks.   Explanation: The paper discusses constructive learning algorithms for artificial neural networks, specifically multilayer networks of threshold logic units or multilayer perceptrons. The focus is on the topology of the networks and how it biases the search for a decision boundary that correctly classifies the training set. The paper also suggests the possibility of designing more efficient constructive algorithms for pattern classification. There is no mention of any other sub-category of AI in the text.
Theory. This paper is a theoretical study of information theory and its application to molecular biology. It does not involve any practical implementation of AI techniques such as case-based reasoning, genetic algorithms, neural networks, reinforcement learning, or rule learning.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as it discusses using TD learning to learn models of the world's dynamics for use in model-based reinforcement learning architectures and dynamic programming methods.   Theory is also a significant aspect of the paper, as it establishes the theoretical foundations of multi-scale models and derives TD algorithms for learning them.
Probabilistic Methods.   Explanation: The paper discusses two different techniques for generating software pipelines, one of which is heuristic and the other is based on integrated integer linear programming (ILP). The ILP technique is described as aiming to produce optimal results, which suggests a probabilistic approach to finding the best solution. However, the paper does not mention any other sub-categories of AI such as case-based reasoning, genetic algorithms, neural networks, reinforcement learning, or rule learning.
This paper does not belong to any of the sub-categories of AI listed. It is a paper on computer architecture and compiler construction, specifically on speculative execution and exception handling in superscalar processors. There is no mention or application of AI techniques in the text.
Reinforcement Learning, Rule Learning.   Reinforcement learning is the main focus of the paper, as it is used for the tuning of fuzzy control rules. The paper explores a simplified method of using reinforcement learning for this purpose.   Rule learning is also present, as the paper discusses the generation and tuning of fuzzy rules for control. The goal is to generate rules that provide smooth control, and reinforcement learning is used to achieve this.
This paper belongs to the sub-categories of AI: Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper discusses the use of genetic algorithms for automatic generation of adaptive programs. It describes how genetic algorithms can be used to evolve programs that can adapt to changing environments.  Reinforcement Learning: The paper also discusses the use of reinforcement learning for automatic generation of adaptive programs. It describes how reinforcement learning can be used to train programs to learn from their environment and adapt their behavior accordingly.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper presents an algorithm called Adaptive Representation through Learning (ARL), which is a genetic programming extension that relies on the discovery of subroutines. ARL uses genetic operators such as mutation and crossover to evolve the procedural representations of control policies.   Reinforcement Learning: The paper discusses a typical reinforcement learning problem of controlling an agent in a dynamic and nondeterministic environment. ARL is used to construct policies for this problem by discovering subroutines that correspond to agent behaviors. The paper also mentions the advantages of procedural representations in reinforcement learning tasks.
Theory. The paper proposes a modification of the classical proximal point algorithm for finding zeroes of a maximal monotone operator in a Hilbert space. The authors establish weak global convergence and local linear rate of convergence under suitable assumptions. The analysis presented in the paper yields an alternative proof of convergence for the exact proximal point method, which allows a nice geometric interpretation and is somewhat more intuitive than the classical proof. There is no mention or application of any of the other sub-categories of AI listed.
This paper does not belong to any of the sub-categories of AI listed. It is focused on a framework for integrating register allocation and instruction scheduling, which is a technique in computer architecture and compiler design. There is no explicit mention or application of any AI sub-category in the text.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper introduces a new model of distributions generated by random walks on graphs, which is a probabilistic method. The learning problems suggested by this model also use the definitions and models of distribution learning defined in [6], which are probabilistic in nature.  Theory: The paper presents a framework that is general enough to model previously studied distribution learning problems, as well as to suggest new applications. It also investigates the relative difficulty of special cases of the general problem and presents algorithms to solve the learning problem under various conditions. These aspects of the paper are related to the theoretical aspects of AI.
Rule Learning, Case Based  Explanation:   The paper describes the AQDT-2 system, which is a rule learning system that learns decision structures from decision rules. This falls under the sub-category of AI known as Rule Learning. The paper also mentions the use of case-based reasoning in the system, which falls under the sub-category of AI known as Case Based.
Rule Learning, Theory.   Explanation:  - Rule Learning: The paper presents a new constructive induction algorithm that constructs new nominal attributes in the form of X-of-N representations. This algorithm can be seen as a rule learning method, as it creates new rules (X-of-N representations) based on the existing attributes of the data.  - Theory: The paper discusses the performance of the X-of-N algorithm in terms of both prediction accuracy and theory complexity. It also presents experimental results to support its claims. Therefore, the paper can be seen as contributing to the theoretical understanding of constructive induction algorithms.
Genetic Algorithms, Neural Networks, Probabilistic Methods.   Genetic Algorithms: The paper discusses the use of gene duplication in promoting coevolution.   Neural Networks: The paper discusses the coevolution of eyes and brains within each simulated species.   Probabilistic Methods: The paper discusses the probability of co-evolution producing good pursuers and good evaders through a pure bootstrapping process.
Neural Networks. This paper belongs to the sub-category of Neural Networks as it introduces a novel method called "Long Short-Term Memory" (LSTM) for learning to store information over extended time intervals via recurrent backpropagation. The paper compares LSTM with other recurrent network algorithms and shows that LSTM leads to many more successful runs and learns much faster. The paper also discusses the computational complexity of LSTM and its ability to solve complex, artificial long time lag tasks.
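The core of LSTM is a memory cell whose state is updated multiplicatively by gates, letting error flow unattenuated over long time lags. Below is a scalar, single-cell sketch of one forward step, not the paper's full recurrent architecture; note that the forget gate shown here is a later addition to LSTM, and all weights are placeholders.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One forward step of a single scalar LSTM cell.
    w maps a gate name to (input weight, recurrent weight, bias).
    Returns (h, c): the new hidden output and cell state."""
    def gate(name, squash):
        wi, wh, b = w[name]
        return squash(wi * x + wh * h_prev + b)
    i = gate("i", sigmoid)      # input gate
    f = gate("f", sigmoid)      # forget gate (added to LSTM after 1997)
    o = gate("o", sigmoid)      # output gate
    g = gate("g", math.tanh)    # candidate cell input
    c = f * c_prev + i * g      # gated additive cell update: the
                                # "constant error carousel"
    h = o * math.tanh(c)
    return h, c
```

The additive form of the cell update is what distinguishes LSTM from plain recurrent backpropagation: because the state is carried forward through a near-linear path, gradients through `c` do not vanish or explode the way they do through repeated squashing functions.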
Neural Networks, Theory.  Neural Networks: The paper discusses the Perceptron learning method, which trains a simple type of neural network.  Theory: The paper introduces and defines the concept of geometric separability, which is a theoretical concept related to learning methods.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper describes a chip that is inspired by a visual motion detection model for the rabbit retina and a computational architecture used for early audition in the barn owl. These models are based on neural networks that process sensory information. The chip itself employs a correlation model, which is a type of neural network that is commonly used for motion detection.  Probabilistic Methods: The chip uses subthreshold analog VLSI techniques, which are probabilistic in nature. These techniques allow the chip to operate at low power and with high efficiency. Additionally, the chip reports the one-dimensional field motion of a scene in real time, which requires probabilistic methods to estimate the motion of objects in the scene.
Reinforcement Learning, Neural Networks.   Reinforcement learning is the main focus of the paper, as it discusses the task of discovering and remembering input-output pairs that generate rewards. The paper also describes a neural network algorithm called complementary reinforcement back-propagation (CRBP) for this task.
Neural Networks, Theory.  Explanation:  - Neural Networks: The paper presents an experiment in which a neural network CDM was learnt for a Japanese OCR environment and then used to do 1-NN classification. - Theory: The paper proves that the Canonical Distortion Measure (CDM) is the optimal distance measure to use for 1 nearest-neighbour (1-NN) classification, and gives PAC-like bounds on the sample-complexity required to learn the CDM. The paper also shows that the CDM reduces to squared Euclidean distance in feature space for function classes that can be expressed as linear combinations of a fixed set of features.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses the schema theorem, which is a fundamental concept in genetic algorithms. It also proposes a method for chromosomes to vote for candidate schemata, which is a key aspect of genetic algorithms.  Theory: The paper presents a theoretical framework for understanding the role of schemata in genetic algorithms. It also proposes a new approach for using schemata to indirectly solve a problem domain.
Probabilistic Methods.   Explanation: The paper discusses Bayesian methodology, which is a probabilistic approach to statistical data analysis. The focus of the paper is on Markov chain Monte Carlo (MCMC) methods, which are intensive simulation techniques used to perform Bayesian analysis when the evaluation of Bayes posterior distribution is difficult. The paper also discusses model selection, which is a common problem in probabilistic methods.
Genetic Algorithms, Neural Networks, Reinforcement Learning.   Genetic Algorithms: The papers [1], [2], [3], [4], [5], [6], and [7] all discuss the use of genetic algorithms in various aspects of AI, such as training neural networks, feature selection, and automatic design of cellular neural networks.   Neural Networks: The papers [1], [2], [3], [4], and [5] all discuss the use of neural networks in conjunction with genetic algorithms or reinforcement learning.   Reinforcement Learning: The paper [5] specifically discusses the use of symbiotic evolution for efficient reinforcement learning.
This paper belongs to the sub-category of AI called Neural Networks. Neural networks are present in the paper, as unsupervised neural networks are used for data mining to discover association rules.
Reinforcement Learning, Rule Learning.   Reinforcement learning is the main focus of the paper, as it discusses the problem of an agent learning to act in the world through trial and error. The paper also explores the strategy of finding restricted classes of action policies that can be learned more efficiently, which falls under the category of rule learning. The algorithms developed in the paper are designed to learn action maps that are expressible in k-DNF, which is a type of rule-based representation.
Case Based, Neural Networks  Explanation:  - Case Based: The paper describes a method that combines Case-Based Reasoning with connectionist learning procedures to automatically learn or adjust similarity measures. - Neural Networks: The paper specifically mentions using Hebbian learning, which is a type of connectionist learning procedure commonly used in neural networks. The paper also mentions combining these ideas with a Case-Based Reasoning engine.
Probabilistic Methods.   Explanation: The paper discusses conditions for the non-existence of central limit theorems for ergodic averages of functionals of a Markov chain, specifically in the context of Metropolis-Hastings algorithms. This involves probabilistic methods such as analyzing the probability of remaining in the current state and rejection probabilities. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Probabilistic Methods, Theory  Probabilistic Methods: The paper discusses the convergence properties of hybrid samplers, which are probabilistic methods used for sampling from complex distributions. The authors analyze the convergence of two different hybrid samplers and provide theoretical results on their convergence rates.  Theory: The paper presents theoretical results on the convergence properties of hybrid samplers. The authors derive bounds on the convergence rates of the samplers and provide proofs for their results. The paper also discusses the implications of these theoretical results for practical applications of hybrid samplers.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as the goal is to develop a reinforcement learning system with limited computational resources that can interact with an unknown environment and maximize cumulative reward. The paper also presents a novel measure for evaluating performance improvements in reinforcement learning, called the "reinforcement acceleration criterion" (RAC), and a method called "environment-independent reinforcement acceleration" (EIRA) that is guaranteed to achieve RAC.   Theory is also a relevant sub-category, as the paper presents a sound theoretical framework for meta-learning and multi-agent learning, based on the principles of reinforcement acceleration and the EIRA method. The paper also discusses the limitations of existing reinforcement learning algorithms and the challenges of policy modification processes in unknown environments, highlighting the need for a new approach.
Genetic Algorithms.   Explanation: The paper provides an overview of evolutionary computation, which is a subfield of AI that uses computational models of evolutionary processes. The paper specifically describes several evolutionary algorithms, which are a type of genetic algorithm. The other sub-categories of AI listed (Case Based, Neural Networks, Probabilistic Methods, Reinforcement Learning, Rule Learning, Theory) are not directly related to the content of the paper.
Neural Networks.   Explanation: The paper presents a feed-forward computational model of visual processing, which is a type of neural network. The model consists of two competing modules that classify input stimuli based on their spatial frequency information. The paper discusses how this model can explain the specialization of face processing in the brain without the need for an innately-specified face processing module.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses a method for Markov chain Monte Carlo, which is a probabilistic method for sampling from a probability distribution.   Theory: The paper presents a general method for proving rigorous, a priori bounds on the number of iterations required to achieve convergence of Markov chain Monte Carlo. It also describes bounds for specific models of the Gibbs sampler and discusses possibilities for obtaining bounds more generally.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses the use of Bayesian networks with tree-structured conditional probability tables to represent action descriptions.   Reinforcement Learning: The paper discusses the use of Markov decision processes (MDPs) as the model of choice for decision theoretic planning, and describes an algorithm, structured policy construction, that aims to make their solution more tractable. The paper also introduces a new decision theoretic regression operator to correct a weakness in the algorithm.
Genetic Algorithms, Reinforcement Learning  Explanation:  - Genetic Algorithms: The paper mentions using a "simple genetic programming system" to solve the problem of programming an artificial ant to follow the Santa Fe trail. The authors also suggest that the problem is difficult for Genetic Algorithms due to "multiple levels of deception". - Reinforcement Learning: The paper discusses redefining the problem so that the ant is "obliged to traverse the trail in approximately the correct order", which suggests a reinforcement learning approach where the ant is rewarded for following the correct order. However, the paper does not explicitly mention reinforcement learning as a technique used.
Genetic Algorithms, Rule Learning.   Genetic Algorithms: The paper describes a polymorphic GP system which uses genetic programming to generate programs.   Rule Learning: The paper discusses how the use of type information can be used to reduce the search space and improve performance in GP. The system described in the paper generates programs based on rules and constraints imposed by the type information.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses the naive Bayesian classification method, which is a probabilistic method of learning. It also mentions that naive Bayesian classification is a nonparametric, nonlinear generalization of logistic regression.  Neural Networks: The paper shows that boosting applied to naive Bayesian classifiers yields combination classifiers that are representationally equivalent to standard feedforward multilayer perceptrons, which are a type of neural network. It also discusses the advantages of boosted naive Bayesian learning over backpropagation, which is a common neural network training algorithm.
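The logistic-regression connection can be made concrete: for a Bernoulli naive Bayes classifier over binary features, the posterior log-odds are an explicit linear function of the inputs. The sketch below trains such a classifier on toy data; the data set and the Laplace smoothing constant are assumptions for the example, not the paper's.

```python
import math

def train_nb(X, y, alpha=1.0):
    """Bernoulli naive Bayes with Laplace smoothing, returned as a
    (bias, weights) pair so the linear decision rule is explicit."""
    n, d = len(y), len(X[0])
    n1 = sum(y)
    n0 = n - n1
    # Smoothed P(x_j = 1 | class) for each class
    p1 = [(sum(x[j] for x, c in zip(X, y) if c == 1) + alpha) / (n1 + 2 * alpha)
          for j in range(d)]
    p0 = [(sum(x[j] for x, c in zip(X, y) if c == 0) + alpha) / (n0 + 2 * alpha)
          for j in range(d)]
    bias = math.log((n1 + alpha) / (n0 + alpha))
    bias += sum(math.log((1 - a) / (1 - b)) for a, b in zip(p1, p0))
    weights = [math.log(a / b) - math.log((1 - a) / (1 - b))
               for a, b in zip(p1, p0)]
    return bias, weights

def predict(bias, weights, x):
    # Classify by the sign of the linear log-odds score
    return 1 if bias + sum(wi * xi for wi, xi in zip(weights, x)) > 0 else 0

# Toy data: class 1 tends to have feature 0 on, class 0 has feature 1 on
X = [[1, 0], [1, 0], [1, 1], [0, 1], [0, 1], [0, 0]]
y = [1, 1, 1, 0, 0, 0]
bias, w = train_nb(X, y)
preds = [predict(bias, w, x) for x in X]
```

Boosting such linear-in-the-log-odds classifiers is what yields the multilayer-perceptron-equivalent combination the paper describes.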
Case Based, Rule Learning  Explanation:  This paper belongs to the sub-category of Case Based AI because it focuses on improving the performance of case-based learning algorithms. The paper presents a baseline information-gain-weighted CBL algorithm and then proposes two variations of the algorithm that create test-case-specific feature weights to improve the performance of minority class predictions.   This paper also belongs to the sub-category of Rule Learning because the proposed variations of the CBL algorithm create test-case-specific feature weights by observing the path taken by the test case in a decision tree created for the learning task and using path-specific information gain values to create an appropriate weight vector for use during case retrieval. This process involves learning rules from the decision tree and using them to create case-specific feature weights.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the use of the sum-product algorithm on the factor graph to approximate bit-wise maximum a posteriori decoding, which is a probabilistic method.  Theory: The paper presents a generalization of several existing codes and introduces two new families of codes, which involves theoretical analysis and development.
Case Based, Rule Learning  The paper belongs to the sub-category of Case Based AI because the authors are implementing a system that uses case-based reasoning to identify previous situations and explanations that could potentially affect the explanation being constructed. They are also using heuristics for constructing explanations that exploit this information in ways similar to what they have observed in instructional dialogues produced by human tutors.  The paper also belongs to the sub-category of Rule Learning because the authors have identified heuristics for constructing explanations that exploit the information from previous explanations. These heuristics can be seen as rules that the system follows to generate explanations.
Neural Networks.   Explanation: The paper discusses a learning algorithm for fully recurrent continually running networks, which are a type of neural network. The RTRL algorithm, which this paper improves upon, is also a neural network algorithm.
Probabilistic Methods  Explanation: The paper discusses the convergence rate of reversible Markov chains, which is a topic related to probabilistic methods in AI. Cheeger's constant, which is used to bound the convergence rate, is a concept from probability theory. The paper does not discuss any other sub-categories of AI mentioned in the options.
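To illustrate how Cheeger's constant bounds convergence, the sketch below computes the conductance of a small reversible chain by brute force and checks the standard two-sided bound phi**2 / 2 <= 1 - lambda_2 <= 2 * phi on the spectral gap; the lazy walk on a 4-cycle is an illustrative example, not taken from the paper.

```python
import itertools
import numpy as np

def conductance(P, pi):
    """Cheeger's constant of a reversible chain: the minimum over sets S
    with pi(S) <= 1/2 of flow(S, S^c) / pi(S), found by brute force."""
    n = len(pi)
    best = np.inf
    for r in range(1, n):
        for S in itertools.combinations(range(n), r):
            mass = pi[list(S)].sum()
            if mass <= 0.5:
                Sc = [j for j in range(n) if j not in S]
                flow = sum(pi[i] * P[i, j] for i in S for j in Sc)
                best = min(best, flow / mass)
    return best

# Lazy random walk on a 4-cycle (reversible, uniform stationary dist.)
P = np.zeros((4, 4))
for i in range(4):
    P[i, i] = 0.5
    P[i, (i + 1) % 4] = 0.25
    P[i, (i - 1) % 4] = 0.25
pi = np.full(4, 0.25)

phi = conductance(P, pi)
gap = 1 - np.sort(np.linalg.eigvalsh(P))[-2]  # spectral gap 1 - lambda_2
# Cheeger's inequality: phi**2 / 2 <= gap <= 2 * phi
```

Here the gap (0.5) sits between phi**2 / 2 = 0.03125 and 2 * phi = 0.5, and the gap in turn controls the geometric convergence rate of the chain.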
Probabilistic Methods. This paper belongs to the sub-category of Probabilistic Methods in AI. The paper discusses the use of Markov chain Monte Carlo (MCMC) methods, which are probabilistic methods used for sampling from complex probability distributions. The paper proposes an estimator for the normalization constant of the target density function, a quantity that is typically intractable and that MCMC samplers themselves sidestep. The paper also discusses the use of kernel estimators, which are probabilistic methods used for estimating probability density functions. Overall, the paper focuses on the use of probabilistic methods for monitoring the convergence of MCMC samplers.
Probabilistic Methods, Theory.   The paper introduces a new approach to modeling uncertainty based on plausibility measures, which is a type of probabilistic method. The paper also focuses on one application of plausibility measures in default reasoning, which is a theoretical aspect of AI.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the process of obtaining constructive beliefs by using manifest beliefs and preferences to rationally choose the most useful conclusions indicated by the manifest beliefs. This process involves probabilistic reasoning and decision-making.  Theory: The paper presents a theoretical perspective on the nature of belief and its relation to memory and decision-making. It argues for a view of belief as the result of rational representation, which is distinct from the traditional logical view of belief. The paper also discusses the limitations of the logical view and proposes a more illuminating alternative.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the basic notions of probability and how they influence the design and analysis of reasoning and representation systems.   Theory: The paper surveys the literature on how the economic theory of rationality influences reasoning and representation systems.
Probabilistic Methods, Rule Learning  The paper belongs to the sub-category of Probabilistic Methods because it proposes a new algorithm that uses a probabilistic approach to assemble DNA sequences. The algorithm uses statistical models to estimate the likelihood of different sequences and selects the most probable one as the final assembly.  The paper also belongs to the sub-category of Rule Learning because the algorithm uses a set of rules to guide the assembly process. The rules are based on the properties of DNA sequences and the characteristics of the sequencing data. The algorithm learns from the data and adjusts the rules accordingly to improve the accuracy of the assembly.
Probabilistic Methods.   The paper explicitly mentions the use of statistical clues in their approach to correctly assemble results even in the presence of extensive repetitive sequences. They also mention the robustness of their algorithm to noise and the presence of repetitive sequences, which suggests a probabilistic approach to handling uncertain data.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper mentions that the MUSIC supercomputer has been used for neural network simulation, indicating that this sub-category of AI is relevant to the research.  Probabilistic Methods: While not explicitly mentioned, the paper discusses the reduction in electric power requirements, weight, and price of the MUSIC system, which suggests that probabilistic methods may have been used to optimize the design and performance of the supercomputer.
Genetic Algorithms, Neural Networks.   The paper discusses the integration of a constraint logic programming system (CLP) with a system based on genetic algorithms (GA) for the purpose of training neural networks. The framework presented, CoCo, uses ECLiPSe to generate constraints and chromosome representations for the neural networks, and GENOCOP to find an optimal solution. This involves the use of genetic algorithms to optimize the error of the network, and the use of neural networks as the subject of the optimization.
Probabilistic Methods.   Explanation: The paper discusses a logical approach to reasoning about uncertainty, which involves the use of probability theory. The author discusses the use of Bayesian networks and probabilistic inference in this context. While other sub-categories of AI may also be relevant to this topic, such as rule learning or theory, the focus of the paper is on probabilistic methods.
Genetic Algorithms.   Explanation: The paper compares two different genetic algorithms (SAW-ing Evolutionary Algorithm and Grouping Genetic Algorithm) for solving the graph coloring problem. The paper discusses the implementation and performance of these algorithms, as well as their strengths and weaknesses. Therefore, the paper is primarily focused on genetic algorithms as a sub-category of AI.
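Neither of the compared algorithms is specified here, but the basic GA loop for graph coloring can be sketched as follows; the representation (one color gene per node), the operators (truncation selection, uniform crossover, point mutation), and all parameters are generic illustrative choices, not those of the SAW-ing EA or the Grouping GA.

```python
import random

def ga_color(edges, n_nodes, k, pop_size=40, gens=200, seed=1):
    """Minimal generational GA for k-coloring: a chromosome assigns each
    node one of k colors; fitness is the number of monochromatic edges
    (lower is better)."""
    rng = random.Random(seed)

    def conflicts(c):
        return sum(c[u] == c[v] for u, v in edges)

    pop = [[rng.randrange(k) for _ in range(n_nodes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=conflicts)
        if conflicts(pop[0]) == 0:
            break
        nxt = pop[:2]                                     # elitism: keep the two best
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)   # truncation selection
            child = [rng.choice(g) for g in zip(p1, p2)]  # uniform crossover
            if rng.random() < 0.3:                        # point mutation
                child[rng.randrange(n_nodes)] = rng.randrange(k)
            nxt.append(child)
        pop = nxt
    best = min(pop, key=conflicts)
    return best, conflicts(best)

# 3-color a 5-cycle (chromatic number 3)
cycle = [(i, (i + 1) % 5) for i in range(5)]
coloring, bad = ga_color(cycle, 5, 3)
```

A conflict count of zero means a proper coloring was found; real benchmarks use far larger graphs and more sophisticated operators than this sketch.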
Probabilistic Methods.   Explanation: The paper focuses on analyzing the geometric ergodicity of Gibbs and Block Gibbs samplers for a hierarchical random effects model. Gibbs sampling is a probabilistic method used for generating samples from a probability distribution, and the analysis of geometric ergodicity is a probabilistic concept related to the convergence of Markov chains. Therefore, this paper belongs to the sub-category of AI known as Probabilistic Methods.
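For concreteness, a minimal two-block Gibbs sampler (targeting a bivariate normal rather than the paper's hierarchical random effects model) looks like this; the target distribution and correlation are illustrative assumptions.

```python
import math
import random

def gibbs_bvn(rho, n, seed=0):
    """Two-block Gibbs sampler for a bivariate normal with unit marginals
    and correlation rho: each full conditional is N(rho * other, 1 - rho**2)."""
    rng = random.Random(seed)
    x = y = 0.0
    s = math.sqrt(1 - rho * rho)
    out = []
    for _ in range(n):
        x = rng.gauss(rho * y, s)  # draw x | y from its full conditional
        y = rng.gauss(rho * x, s)  # draw y | x from its full conditional
        out.append((x, y))
    return out

draws = gibbs_bvn(0.5, 20000)
mx = sum(x for x, _ in draws) / len(draws)
corr_xy = sum(x * y for x, y in draws) / len(draws)  # E[xy] = rho here
```

Geometric ergodicity, the property the paper analyzes, is what guarantees that averages like `mx` converge to their stationary values at a geometric rate.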
Genetic Algorithms, Neural Networks, Theory.   Genetic Algorithms: The paper discusses the use of genetic algorithms as a global optimization method for training neural networks.   Neural Networks: The paper is primarily focused on training neural networks using regularities and constraints on the weights.   Theory: The paper introduces the concept of regularities and their use in expanding the search space for optimization. It also discusses the use of constraint logic programming for finding a satisfiable set of constraints.
Theory.   Explanation: The paper focuses on the theoretical study of PAC-learning algorithms for specialized classes of deterministic finite automata, specifically branching programs. The authors analyze the difficulty of the learning problem based on the width of the branching program and present distribution-free and uniform distribution algorithms for learning width-2 branching programs. They also explore the implications of an efficient algorithm for learning width-3 branching programs on the learnability of DNF and parity with noise. The paper does not involve any practical implementation or application of AI techniques, but rather focuses on the theoretical analysis of learning algorithms for a specific class of automata.
Theory.   Explanation: The paper discusses the theoretical analysis of the computational complexity of the Longest Common Subsequence problem and its implications for other sequence alignment and consensus problems. It does not involve the application of any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Rule Learning, Theory.   This paper belongs to the sub-category of Rule Learning because it focuses on decision tree learning, which is a type of rule-based learning. The paper proposes a method for constructing new attributes that can be used in decision tree learning, which is a key aspect of rule learning.   Additionally, the paper belongs to the sub-category of Theory because it presents a theoretical framework for constructing new attributes. The authors discuss the mathematical properties of their proposed method and provide proofs of its effectiveness. They also compare their method to existing approaches and analyze its performance in various scenarios.
Theory. This paper belongs to the sub-category of AI theory. The author applies basic concepts of Kolmogorov complexity theory to the set of possible universes and discusses perceived and true randomness, life, generalization, and learning in a given universe. The paper does not discuss any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Probabilistic Methods.   Explanation: The paper discusses the Metropolis-Hastings algorithm, which is a probabilistic method for estimating a distribution. The paper proposes a class of candidate distributions that "self-target" towards the high density areas of the target distribution, which improves the convergence rates of the algorithm. The paper also discusses examples of distributions with exponential and polynomial tails, and a logistic regression model using a Gibbs sampling algorithm, all of which are probabilistic models.
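For reference, a plain random-walk Metropolis-Hastings sampler (without the paper's self-targeting proposal) can be sketched as follows; the standard-normal target and step size are illustrative assumptions.

```python
import math
import random

def metropolis(logpi, x0, n, step=1.0, seed=0):
    """Random-walk Metropolis: propose y = x + step * N(0, 1) and accept
    with probability min(1, pi(y) / pi(x)); on rejection the chain stays
    at the current state."""
    rng = random.Random(seed)
    x, lp = x0, logpi(x0)
    chain, accepts = [], 0
    for _ in range(n):
        y = x + step * rng.gauss(0.0, 1.0)
        lq = logpi(y)
        if rng.random() < math.exp(min(0.0, lq - lp)):
            x, lp = y, lq
            accepts += 1
        chain.append(x)
    return chain, accepts / n

# Standard normal target; the step size trades off move distance
# against rejection probability
chain, acc_rate = metropolis(lambda x: -0.5 * x * x, 0.0, 20000, step=2.4)
mean = sum(chain) / len(chain)
var = sum(x * x for x in chain) / len(chain) - mean ** 2
```

A self-targeting candidate distribution replaces the blind Gaussian proposal with one biased toward high-density regions, which is what improves the convergence rates discussed in the paper.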
Probabilistic Methods.   Explanation: The paper discusses the predictability of data values using probabilistic methods such as Bayesian networks and Markov models. The authors analyze the accuracy of these methods in predicting data values and compare them to other approaches. While other sub-categories of AI may also be relevant to the topic, the focus on probabilistic methods is the most prominent in the paper.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods are present in the paper as the authors discuss the complexity of learning problems and the large variety of available techniques. They also mention the need to understand this complexity and construct a characterization of learning situations. This involves probabilistic modeling and inference.  Reinforcement Learning is present in the paper as the authors discuss the development of a decision-support system for marine propeller design. This involves using reinforcement learning to train the system to make decisions based on feedback from the environment. The authors also mention the need for future projects to record their successes, limitations, and failures, which is a key aspect of reinforcement learning.
Theory.   Explanation: The paper presents a theoretical result on the learnability of DNF under the uniform distribution, and proposes a learning algorithm based on this result. There is no mention or application of any specific AI sub-category such as neural networks, reinforcement learning, etc.
Rule Learning, Theory.   Explanation: The paper discusses the construction and comparison of different methods for forming multivariate decision trees, which falls under the sub-category of Rule Learning. The paper also discusses issues related to representing and learning multivariate tests, selecting features, and pruning decision trees, which are all theoretical aspects of machine learning.
Probabilistic Methods.   Explanation: The paper introduces a methodology called polyclass that uses adaptively selected linear splines and their tensor products to model conditional class probabilities. The authors also develop a modification to this methodology involving the use of the stochastic gradient method in fitting polyclass models to given sets of basis functions. The focus of the paper is on developing a methodology for modeling conditional class probabilities, which is a probabilistic approach. There is no mention of case-based reasoning, genetic algorithms, reinforcement learning, rule learning, or theory in the paper.
Reinforcement Learning.   Explanation: The paper presents a reinforcement learning algorithm called Nested Q-learning that generates a hierarchical control structure in reinforcement learning domains. The focus of the paper is on learning reactive/hierarchical relationships in reinforcement environments, which is a key aspect of reinforcement learning. None of the other sub-categories of AI are mentioned or discussed in the paper.
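To make the underlying update concrete, here is a sketch of plain one-step tabular Q-learning (without the nesting proposed in the paper) on a toy corridor environment; the MDP and the hyperparameters are illustrative assumptions, not taken from the paper.

```python
import random

def q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a corridor MDP: states 0..n-1, actions
    0 = left and 1 = right, reward 1 on reaching the rightmost state."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]

    def greedy(s):
        if Q[s][0] == Q[s][1]:       # break ties randomly
            return rng.randrange(2)
        return 1 if Q[s][1] > Q[s][0] else 0

    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            a = rng.randrange(2) if rng.random() < eps else greedy(s)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # One-step backup toward r + gamma * max_a' Q(s', a')
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
policy = [1 if Q[s][1] > Q[s][0] else 0 for s in range(5)]
```

Nested Q-learning layers such learners hierarchically; this flat version is the building block each level applies.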
Reinforcement Learning, Theory.   Reinforcement learning is present in the paper as the algorithm presented is an on-line investment algorithm that learns from the market outcomes and adjusts the portfolio accordingly. The algorithm employs a multiplicative update rule derived using a framework introduced by Kivinen and Warmuth.   Theory is also present in the paper as the authors present a theoretical framework for their algorithm and provide mathematical proofs for its performance. They also compare their algorithm to other existing algorithms and provide theoretical explanations for the differences in performance.
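A multiplicative update of the kind derived from the Kivinen-Warmuth framework, in the style of the exponentiated-gradient portfolio rule, can be sketched as follows; the learning rate and the toy price-relative sequence are illustrative assumptions, not the paper's experiments.

```python
import math

def eg_update(w, x, eta=0.05):
    """One multiplicative portfolio update:
    w_i <- w_i * exp(eta * x_i / (w . x)), then renormalize."""
    wx = sum(wi * xi for wi, xi in zip(w, x))
    nw = [wi * math.exp(eta * xi / wx) for wi, xi in zip(w, x)]
    Z = sum(nw)
    return [v / Z for v in nw]

def run(price_relatives, eta=0.05):
    """Run the strategy over a sequence of price-relative vectors,
    starting from the uniform portfolio; wealth compounds by w . x
    each period."""
    n = len(price_relatives[0])
    w = [1.0 / n] * n
    wealth = 1.0
    for x in price_relatives:
        wealth *= sum(wi * xi for wi, xi in zip(w, x))
        w = eg_update(w, x, eta)
    return wealth, w

# Two assets: the first gains 2% per period, the second loses 2%
seq = [[1.02, 0.98]] * 50
wealth, w = run(seq)
```

The update shifts weight toward assets with above-average recent returns while the normalization keeps the portfolio on the simplex, which is the behavior the regret analysis in such papers bounds.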
Theory. This paper presents a formal account of belief revision operators and their semantics in an epistemic logic. It does not involve any implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Probabilistic Methods.   Explanation: The paper introduces a novel enhancement for learning Bayesian networks, which are a type of probabilistic graphical model used for probabilistic reasoning. The approach involves selecting a subset of features that maximize predictive accuracy prior to the network learning phase, and the paper explicitly examines the effects of feature selection and node ordering on the learning process. The goal is to construct networks that are simpler to evaluate but still have high predictive accuracy relative to networks that model all features. Therefore, the paper is primarily focused on probabilistic methods for learning Bayesian networks.
Reinforcement Learning.   Explanation: The paper focuses on improving Nested Q-learning (NQL) for learning hierarchical control structures in reinforcement environments. The simulation of a robot performing related tasks is used to compare hierarchical and non-hierarchical learning techniques in a reinforcement learning setting. Therefore, reinforcement learning is the most related sub-category of AI in this paper.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of an ensemble of networks to address overfitting, which is a common problem in neural network models. The title also mentions "Some Recent Experiments with Postal Zip Data," which suggests that the paper is exploring the use of neural networks for classification tasks.  Probabilistic Methods: The paper discusses the use of weight decay as a method for controlling the variance of a classifier, which is a probabilistic method commonly used in machine learning. The paper also mentions the need for optimal methods for training an ensemble of networks, which could involve probabilistic techniques such as Bayesian optimization.
Probabilistic Methods.   Explanation: The paper proposes a probabilistic method for classification using Gaussian processes. The authors use a variational approach to approximate the posterior distribution over the latent function values, which allows for efficient inference and prediction. The paper also discusses the use of different covariance functions and hyperparameters to model the data. Overall, the paper focuses on probabilistic modeling and inference, making it most closely related to the sub-category of Probabilistic Methods in AI.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of best-first model merging for dynamically choosing the structure of a neural architecture.   Probabilistic Methods: The paper mentions the approach being applicable to both learning and recognition tasks, which often generalizes significantly better than fixed structures. This suggests the use of probabilistic methods for model selection.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses algorithms for estimating a given measure based on a class of diffusions, which are stochastic processes. The convergence of these diffusions is analyzed using probabilistic methods such as exponential and polynomial convergence rates.   Theory: The paper presents theoretical results on the convergence of diffusions and their discretizations. It discusses the conditions under which the diffusions converge to the given measure and how the convergence rates can be improved. The paper also compares different discretization methods and their convergence rates.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of a neural network for classification of EEG signals. Specifically, a sparse polynomial builder neural network is used.   Probabilistic Methods: The paper also discusses the use of a probabilistic approach for classification, specifically the use of Bayesian decision theory. The authors mention that the neural network is trained using a maximum likelihood approach, which is a probabilistic method.
Neural Networks, Theory.   Neural Networks: The paper discusses the development of structured receptive fields in simulations using a Hebb-type synaptic plasticity rule in a feed-forward linear network. The focus is on the dynamics of the learning rule in terms of the eigenvectors of the matrix that is closely related to the covariance matrix of input cell activities.   Theory: The paper presents some general theorems regarding the properties of the eigenvectors and their eigenvalues. It also provides analytic and numerical solutions for the eigenvectors at a specific layer of Linsker's network. The analysis of the circumstances in which each eigenvector dominates yields an explanation of the emergence of certain structures. The paper also develops criteria for estimating the boundary of the parameter regime in which certain structures emerge.
Probabilistic Methods.   Explanation: The paper discusses the use of a multidimensional random walk Metropolis algorithm, which is a probabilistic method for sampling from a target density. The paper also considers the scaling of the proposal distribution in order to maximize the efficiency of the algorithm. The main result is a weak convergence result as the dimension of the target density tends to infinity, which is a probabilistic concept. The paper does not discuss any other sub-categories of AI.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of neural networks to learn overcomplete representations. It describes how neural networks can be trained to learn a set of basis functions that can represent the input data in a more efficient and compact way.   Probabilistic Methods: The paper also discusses the use of probabilistic methods, such as sparse coding and independent component analysis, to learn overcomplete representations. These methods aim to find a set of basis functions that capture the statistical structure of the input data.
Neural Networks, Reinforcement Learning.   Neural Networks are present in the text as the authors have implemented a neural network architecture as the reactive component of a two layer control system for a simulated race car. They have also tested whether decomposing reactivity into separate behaviors leads to superior overall performance, coordination and learning convergence.   Reinforcement Learning is present in the text as the authors have proposed combining reactivity with planning as a means of compensating for potentially slow response times of planners while still making progress toward long term goals. They have also tested whether decomposing reactivity into separate behaviors leads to superior overall performance, coordination and learning convergence.
Rule Learning, Theory.   Rule Learning is present in the text as the paper introduces a formal model of teaching in which the teacher is tailored to a particular learner, and the teaching protocol is designed so that no collusion is possible. The paper also describes teacher/learner pairs for the classes of 1-decision lists and Horn sentences.   Theory is present in the text as the paper presents general results relating this model of teaching to various previous results and proves that any class that can be exactly identified by a deterministic polynomial-time algorithm with access to a very rich set of example-based queries is teachable by a computationally unbounded teacher and a polynomial-time learner.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper explores algorithms for automatic quantization of real-valued datasets using thermometer codes for pattern classification applications. The quantized datasets are then used to train simple perceptrons, which are a type of neural network.   Probabilistic Methods: The paper discusses a randomized thermometer code generation technique for quantization, which involves a probabilistic approach to generating the codes. Additionally, the paper evaluates the performance of the quantized datasets using statistical measures such as generalization on test data.
Probabilistic Methods, Rule Learning  Explanation:   Probabilistic Methods: The paper discusses using a set of model constraint functions to measure how much each modeling assumption is violated. These constraint functions can be seen as probabilistic methods, as they provide a measure of uncertainty in the modeling process.  Rule Learning: The paper suggests using the values of the model constraint functions as constraint inputs to a standard constrained nonlinear optimization numerical method. This can be seen as a form of rule learning, as the system is learning how to use the model constraint functions to guide the search process.
Rule Learning, Theory.   Rule Learning is present in the text as the paper presents a logic-oriented approach to learning grounded concepts.   Theory is present in the text as the paper discusses the importance of grounding concepts in the environment through sensor data and integrating action-oriented perceptual features and perception-oriented action features.
Neural Networks, Reinforcement Learning  This paper belongs to the sub-categories of Neural Networks and Reinforcement Learning.   Neural Networks: The paper proposes a neural network-based approach to learn action-oriented perceptual features for robot navigation. The authors use a convolutional neural network (CNN) to extract features from raw sensor data, which are then used to predict the robot's actions.   Reinforcement Learning: The authors use a reinforcement learning framework to train the neural network. The robot receives a reward signal based on its performance in reaching the goal, and the neural network is updated to maximize this reward. The authors also use a technique called experience replay, where the robot's experiences are stored in a buffer and used to train the neural network in an offline manner.
Neural Networks.   Explanation: The paper discusses the application of RBF (Radial Basis Function) networks, which are a type of neural network. The focus of the paper is on improving the performance of RBF networks through feature selection, but the underlying technology being used is neural networks.
Probabilistic Methods.   Explanation: The paper discusses parameter estimation in Bayesian networks, which is a probabilistic method used in AI. The paper specifically discusses the EM algorithm and gradient projection algorithm, which are both probabilistic methods commonly used in Bayesian networks.
Reinforcement Learning, Rule Learning.   Reinforcement learning is present in the text as the paper discusses the problem solver learning to modify its motor control parameters in a continuous, on-line manner to successfully accomplish its task. This is a key aspect of reinforcement learning, where the agent learns to take actions in an environment to maximize a reward signal.   Rule learning is present in the text as the paper proposes a learning method that can compile sensorimotor experiences into continuous operators, which can then be used to improve performance of the problem solver. This can be seen as a form of rule learning, where the system learns to map input sensory information to appropriate control outputs based on past experiences.
Theory.   Explanation: The paper discusses a theoretical framework for PAC learning that uses probability distributions related to Kolmogorov complexity. It does not involve any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning. Rule learning is also not directly relevant to the paper's focus on theoretical PAC learning.
Case Based, Theory  Explanation:  The paper belongs to the sub-category of Case Based AI because it discusses the formalization of case memory systems and their learning aspects in case-based reasoning. It also explores issues related to similarity measures and the cases in the case-base.   The paper also belongs to the sub-category of Theory because it presents a formalization of the knowledge content of case memory systems, which is a necessary preliminary to more rigorous analysis of the performance of case-based reasoning systems. It also discusses the generalization of recent formalizations of case-based classification within a framework of case-base semantics.
The paper belongs to the sub-category of AI called "Knowledge Based Systems".   Explanation: The title of the paper explicitly mentions "Knowledge Based Systems", which is a sub-category of AI that deals with the development of systems that can reason and make decisions based on knowledge and rules. The paper discusses the principles and techniques used in the development of such systems, including knowledge representation, inference, and reasoning. Therefore, none of the other sub-categories of AI mentioned (Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, Rule Learning, Theory) are applicable to this paper.
Genetic Algorithms.   Explanation: The paper proposes performance measures for comparing and optimizing genetic algorithms for an optimization problem. It presents a case study in which parameters of a genetic algorithm for robot path planning were tuned and its performance was evaluated using the proposed measures. The paper does not mention any other sub-categories of AI.
Probabilistic Methods.   Explanation: The paper proposes and analyzes a distribution learning algorithm for a subclass of Acyclic Probabilistic Finite Automata (APFA). The algorithm is designed to learn distributions generated by the APFA subclass, which is a probabilistic method. The paper also evaluates the performance of the APFAs on labeled speech data, which is another example of using probabilistic methods.
Neural Networks, Rule Learning.   Neural Networks: The paper discusses the use of neural networks for the task of classifying natural language sentences as grammatical or ungrammatical. It also analyzes the properties of various common recurrent neural network architectures and how they can possess linguistic capability.  Rule Learning: The paper investigates the extraction of rules in the form of deterministic finite state automata. It also discusses the training of neural networks to produce the same judgments as native speakers on sharply grammatical/ungrammatical data, thereby exhibiting the same kind of discriminatory power provided by linguistic frameworks such as Principles and Parameters or Government-and-Binding theory.
Neural Networks. This paper belongs to the sub-category of Neural Networks. The paper proposes a hybrid architecture of a decision tree and a neural network, called the lazy neural tree (LNT), for smooth regression systems. The LNT inherits smoothness of the generated function, incremental adaptability, and conceptual simplicity from the neural network. The paper also mentions that the LNT outperforms traditional neural network simulations by orders of magnitude in terms of efficiency. Therefore, the paper is primarily focused on the application of neural networks to regression systems.
Probabilistic Methods.   Explanation: The paper describes a system that uses a stochastic generative model and Bayesian inference for segmentation and pose estimation. These are both examples of probabilistic methods in AI.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper's title explicitly mentions "Neural Networks" as one of the topics covered in the conference proceedings. The abstract also mentions "neural networks" as a topic that will be discussed.   Probabilistic Methods: The abstract mentions "statistical models" as a topic that will be discussed, which suggests the use of probabilistic methods. Additionally, the use of statistical models often involves probabilistic assumptions and calculations.
Rule Learning, Theory.   Rule Learning is the most related sub-category as the paper discusses methods to choose arguments for a new predicate based on propositional minimisation, which is a common technique in rule learning.   Theory is also relevant as the paper proposes a theoretical framework for identifying relevant terms as arguments for a new predicate. The paper discusses the problem of choosing arguments and proposes a solution based on theoretical considerations.
Probabilistic Methods.   Explanation: The paper describes a method that takes into account the dependencies between adjacent bases and uses conditional probability matrices to locate signals in uncharacterized genomic DNA. The method computes the most likely sequence using a dynamic program, which is a probabilistic approach. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
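The dynamic-programming idea in the entry above can be illustrated with a toy two-state model. This is a hypothetical sketch, not the paper's method: a C/G-rich "signal" state versus a uniform "background" state, with Viterbi decoding recovering the most likely state sequence; all probabilities are invented for the example:

```python
import numpy as np

BASES = "ACGT"
# Hypothetical emission probabilities: the "signal" state is C/G-rich.
emit = {
    "background": dict(zip(BASES, [0.25, 0.25, 0.25, 0.25])),
    "signal":     dict(zip(BASES, [0.05, 0.45, 0.45, 0.05])),
}
states = list(emit)
# Hypothetical transition probabilities between the two states.
trans = {("background", "background"): 0.85, ("background", "signal"): 0.15,
         ("signal", "signal"): 0.85, ("signal", "background"): 0.15}

def viterbi(dna):
    """Dynamic program for the most likely state sequence given the bases."""
    V = [{s: np.log(0.5) + np.log(emit[s][dna[0]]) for s in states}]
    back = []
    for base in dna[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: V[-1][p] + np.log(trans[(p, s)]))
            col[s] = V[-1][prev] + np.log(trans[(prev, s)]) + np.log(emit[s][base])
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    # Trace back the most likely state path
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

path = viterbi("AATTGCGCGCGCAATT")
print(path)  # the C/G-rich middle should come out labelled "signal"
```

The recursion considers, for every position and state, the best-scoring path ending there, so the most likely labelling is found without enumerating all 2^n state sequences.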
Genetic Algorithms, Probabilistic Methods.   Genetic algorithms are present in the simulation research mentioned in the paper, where populations were simulated to study the effects of directional and non-directional mate preferences.   Probabilistic methods are also present in the simulation framework presented in the paper, which allows for the simulation of a wide range of mate preferences. The paper discusses the use of probability distributions to model mate preferences and the effects of sexual selection on phenotype adaptation.
Case Based.   Explanation: The paper discusses the use of Case Based Reasoning (CBR) for technical diagnosis, which is a sub-category of AI that involves using past experiences to solve new problems. The paper also mentions the development of expert systems, which is another application of CBR. While other sub-categories of AI may be mentioned in passing, the focus of the paper is on CBR.
Neural Networks.   Explanation: The paper investigates the use of simple recurrent networks as transducers for natural language input, and introduces extensions to Elman's original recurrent network architecture. The experiments demonstrate the network's ability to process sequential input and map it to non-sequential feature-based semantics, indicating the use of neural networks for natural language processing.
This paper belongs to the sub-category of AI called Neural Networks.   Explanation: The paper discusses techniques of regularization in the context of learning and approximation using neural networks. The title of the paper also suggests a focus on neural networks, as it includes the term "approximation" which is often associated with neural network models.
Reinforcement Learning, Neural Networks.   Reinforcement Learning is the primary sub-category of AI that this paper belongs to. The authors use a connectionist network trained with reinforcement to control both an autonomous robot vehicle and a simulated robot. They show that, given appropriate sensory data and architectural structure, a network can learn to control the robot for a simple navigation problem. They then investigate a more complex goal-based problem and examine the plan-like behavior that emerges.   Neural Networks is also relevant, as the controller itself is a connectionist network.
Case Based, Reinforcement Learning  Explanation:  - Case Based: The paper proposes an architecture that applies Case-Based Reasoning to control in robotics. The experimental evaluation also compares the results to other machine learning algorithms applied to the same problem. - Reinforcement Learning: The paper mentions that the proposed architecture is experimentally evaluated on two real world domains, which suggests that it involves learning from feedback or rewards, a key characteristic of reinforcement learning.
Probabilistic Methods, Reinforcement Learning, Theory.   Probabilistic Methods: The paper discusses the use of probabilistic models to learn from drifting distributions. It mentions the use of Bayesian methods and probabilistic graphical models to handle the uncertainty in the data.  Reinforcement Learning: The paper discusses the use of reinforcement learning algorithms to learn from drifting distributions. It mentions the use of online learning and exploration-exploitation trade-offs to adapt to the changing data.  Theory: The paper presents theoretical results on the complexity of learning from drifting distributions. It discusses the sample complexity and computational complexity of various learning algorithms and provides bounds on their performance.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper uses a single (unknown) probability distribution over the domain both to generate random examples for the learning algorithm and to measure the speed at which the target changes.   Theory: The paper presents theoretical results on how the complexity of the class H of possible targets, as measured by its VC-dimension d, affects the difficulty of tracking the target concept. The paper also shows that if the problem of minimizing the number of disagreements with a sample from among concepts in a class H can be approximated to within a factor k, then there is a simple tracking algorithm for H which can achieve a probability ε of making a mistake if the target movement rate is at most a constant times ε²/(k(d + k) ln(1/ε)), where d is the Vapnik-Chervonenkis dimension of H.
Genetic Algorithms.   Explanation: The paper presents an approach to feature subset selection using a genetic algorithm, which is a randomized heuristic search technique. The paper discusses the advantages of using a genetic algorithm for this optimization problem and presents experiments demonstrating the feasibility of this approach. While the paper mentions the use of inductive learning algorithms and neural networks, these are not the main focus of the paper and are only mentioned in the context of how the choice of features affects the accuracy of the classification function. Therefore, Genetic Algorithms is the most related sub-category of AI to this paper.
Probabilistic Methods.   Explanation: The paper focuses on regression with Gaussian processes, which is a probabilistic method for predicting outcomes based on prior distributions over functions. The paper discusses Bayesian linear regression and how it can be seen as a Gaussian process predictor based on priors over functions. It also covers the use of Gaussian processes in classification problems, which is another probabilistic method. While the paper briefly mentions neural network models, it does not focus on them enough to be considered a Neural Networks paper.
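The prior-over-functions view in the entry above can be made concrete in a few lines. This is a minimal sketch under assumed choices: a zero-mean GP with a squared-exponential kernel and a small noise term; `rbf_kernel` and `gp_predict` are names invented for the illustration:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = A[:, None] - B[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_predict(X, y, X_star, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP at test inputs X_star."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    K_s = rbf_kernel(X, X_star)
    K_ss = rbf_kernel(X_star, X_star)
    alpha = np.linalg.solve(K, y)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.diag(cov)

X = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(X)
mean, var = gp_predict(X, y, np.array([0.5]))
print(mean, var)  # mean should be close to sin(0.5), with small variance
```

The prediction is just a conditional Gaussian: the kernel encodes the prior over functions, and conditioning on the observed (X, y) pairs yields the posterior at the test point.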
Rule Learning, Theory.   The paper discusses the "anatomy" of a general learning mechanism called Chunking in the SOAR architecture. Chunking is a rule-based learning mechanism that uses explanations to generalize from specific examples. The paper also presents quantitative results on the utility of explanation-based learning, which is a theoretical approach to machine learning. Therefore, the paper belongs to the sub-categories of Rule Learning and Theory.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper discusses the feasibility of applying evolutionary methods, which includes genetic algorithms, to automatically generate controllers for physical mobile robots. The main approaches discussed in the paper involve the use of genetic algorithms to evolve controllers.  Reinforcement Learning: The paper discusses the challenges and unanswered problems in evolving controllers for physical robots, which includes the challenge of designing appropriate reward functions for reinforcement learning. The paper also mentions some promising directions for future research, such as combining reinforcement learning with other techniques.
Theory.   Explanation: The paper focuses on the theoretical analysis of mistake-driven update procedures for learning linear discriminant concepts, and introduces a new class of algorithms called quasi-additive algorithms. The paper does not discuss any specific application or implementation of these algorithms, but rather provides a general proof of convergence and a technique for proving mistake bounds. Therefore, the paper belongs to the sub-category of AI theory.
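The mistake-driven flavor of these update procedures can be sketched with the two canonical family members, an additive (Perceptron-style) and a multiplicative (Winnow-style) update. This is a minimal illustration on synthetic, linearly separable data; the function names and data are invented for the sketch:

```python
import numpy as np

def perceptron(X, y, epochs=400):
    """Additive mistake-driven update: w += y_t * x_t on every mistake."""
    w = np.zeros(X.shape[1])
    mistakes = 0
    for _ in range(epochs):
        for x_t, y_t in zip(X, y):
            if y_t * (w @ x_t) <= 0:   # wrong (or undecided) prediction
                w += y_t * x_t
                mistakes += 1
    return w, mistakes

def balanced_winnow(X, y, eta=0.1, epochs=50):
    """Multiplicative mistake-driven update on a pair of positive weight vectors."""
    u, v = np.ones(X.shape[1]), np.ones(X.shape[1])
    mistakes = 0
    for _ in range(epochs):
        for x_t, y_t in zip(X, y):
            if y_t * ((u - v) @ x_t) <= 0:
                u *= np.exp(eta * y_t * x_t)
                v *= np.exp(-eta * y_t * x_t)
                mistakes += 1
    return u - v, mistakes

# Separable toy data: the label is the sign of the first coordinate, with a margin.
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 5))
X = X[np.abs(X[:, 0]) > 0.3]
y = np.sign(X[:, 0])
w1, m1 = perceptron(X, y)
w2, m2 = balanced_winnow(X, y)
print(m1, m2)
```

Both procedures update only when a mistake occurs, which is exactly the property the mistake-bound analyses exploit; on separable data the Perceptron makes finitely many mistakes and then separates the sample perfectly.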
Case Based, Probabilistic Methods  Explanation:  - Case Based: The paper presents a prototype of a similarity-based retrieval system, which is a type of case-based reasoning system. The system allows for an imprecisely specified query and assesses whether the retrieved items are relevant in the initial context specified in the query. - Probabilistic Methods: The paper discusses system evaluation with respect to usefulness, scalability, applicability, and comparability. These are all aspects that can be evaluated using probabilistic methods, such as Bayesian networks or Markov models. However, the paper does not explicitly mention the use of any specific probabilistic method.
Case Based, Rule Learning  Explanation:  - Case Based: The paper is primarily focused on case-based reasoning, which is a sub-category of AI that involves solving new problems by adapting solutions from similar past cases. The paper discusses the use of inductive learning techniques to improve the performance and flexibility of a case-based reasoning system. - Rule Learning: The paper also discusses the use of inductive knowledge to improve knowledge representation in the case-based reasoning system. This involves learning rules or patterns from the data that can be used to make more accurate predictions or classifications.
Case Based.   Explanation: The paper presents a case-based reasoning system and focuses on the flexibility of the case-based reasoning process. The title also includes the phrase "case-based reasoning approach." There is no mention of genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning in the abstract.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses the use of Partially Observable Markov Decision Process (POMDP) as a framework for modeling the complex therapy decision process. POMDP is a probabilistic method that allows for modeling uncertainty in the underlying disease and patient response to treatment.  Reinforcement Learning: The paper discusses finding the optimal therapy within the POMDP framework, which is a problem that can be solved using reinforcement learning techniques. The authors also investigate approximation methods to simplify the model construction process and solve larger therapy problems faster, which is a common approach in reinforcement learning.
Probabilistic Methods.   Explanation: The paper discusses a method for deriving a consensus probability distribution over uncertain events based on the subjective probability distributions of a group of Bayesians. The method involves using a market-based approach where participants bet on securities contingent on the uncertain events, and the consensus probability of each event is defined as the corresponding security's equilibrium price. The paper also discusses how the market framework provides explicit monetary incentives for participation and honesty, and how "no arbitrage" arguments ensure that the equilibrium prices form legal probabilities. Overall, the paper is focused on probabilistic methods for pooling opinions.
Genetic Algorithms.   Explanation: The paper discusses the use of Genetic Programming, which is a sub-category of Genetic Algorithms. The paper specifically addresses the performance overheads of evolving a large number of data structures, which is a common issue in Genetic Programming. The paper proposes a solution to this problem through the use of a formally-based representation and strong typing, which is a technique commonly used in Genetic Algorithms. Therefore, this paper belongs to the sub-category of Genetic Algorithms.
Theory  Explanation: The paper discusses a domain theory and its interpretation, without involving any learning algorithms. It analyzes the accuracy of different interpretations of the theory, which demonstrates the informativeness of the theory itself. Therefore, this paper belongs to the sub-category of AI called Theory.
Probabilistic Methods.   Explanation: The paper discusses a strategy for polychotomous classification that involves estimating class probabilities for each pair of classes, and then coupling the estimates together. The coupling model is similar to the Bradley-Terry method for paired comparisons. The nature of the class probability estimates that arise is also studied. These are all characteristics of probabilistic methods in AI.
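The coupling step described in the entry above can be sketched as follows. This is a minimal illustration of a Bradley-Terry-style iterative coupling with equal pair weights; `couple_pairwise` is an invented name, and the pairwise estimates here are generated to be exactly consistent with a known class distribution:

```python
import numpy as np

def couple_pairwise(R, n_iter=100):
    """Iteratively couple pairwise estimates R[i, j] ~ P(class i | class i or j)
    into one probability vector (Bradley-Terry-style model, equal pair weights)."""
    K = R.shape[0]
    p = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        for i in range(K):
            num = sum(R[i, j] for j in range(K) if j != i)
            den = sum(p[i] / (p[i] + p[j]) for j in range(K) if j != i)
            p[i] *= num / den
        p /= p.sum()
    return p

# Pairwise estimates generated to be exactly consistent with (0.5, 0.3, 0.2)
true = np.array([0.5, 0.3, 0.2])
R = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i != j:
            R[i, j] = true[i] / (true[i] + true[j])
p = couple_pairwise(R)
print(np.round(p, 2))  # recovers approximately [0.5, 0.3, 0.2]
```

When the pairwise estimates are consistent, the true class probabilities are a fixed point of the update; with noisy estimates the iteration finds the closest coupled model instead.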
Neural Networks. The paper investigates the possibility of synaptic plasticity in both excitatory and inhibitory pathways during intracortical microstimulation (ICMS) and peripheral conditioning, which are both related to neural network function and plasticity. The paper does not discuss any other sub-categories of AI.
Theory.   Explanation: This paper is focused on discussing and analyzing the theoretical aspects of testing exogeneity of instrumental variables. It does not involve the use of any specific AI sub-category such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
This paper belongs to the sub-category of AI called Neural Networks.   Explanation: The paper proposes a partial memory incremental learning methodology that utilizes a neural network for computer intrusion detection. The authors describe the architecture of the neural network and how it is trained using the proposed methodology. They also evaluate the performance of the neural network in detecting various types of intrusions. Therefore, the paper primarily focuses on the use of neural networks for intrusion detection.
Rule Learning, Theory.   Explanation:  - Rule Learning: The paper mentions the use of "partial expert knowledge (classification rules or causal and structural dependencies between attributes)" to improve the results in the classification step. This indicates the use of rule-based techniques in the methodology. - Theory: The paper presents a methodology for discovering concepts and organizing hierarchies in ill-structured domains, which involves conceptual learning techniques and classification. This can be seen as a theoretical approach to knowledge organization and representation.
This paper belongs to the sub-category of AI called Neural Networks. Neural networks are present in the paper as unsupervised neural networks are used for data mining to discover association rules.
Neural Networks, Theory.   Neural Networks: The paper discusses the use of a non-linear function approximator that constructs its own features, which is a characteristic of neural networks. The CLEF algorithm is also compared to C4.5, a decision tree learning algorithm, which is a common benchmark for neural network algorithms.  Theory: The paper presents a new algorithm, CLEF, and proves its ability to separate all consistently labelled training instances, even when they are not linearly separable in the input variables. This is a theoretical result that demonstrates the effectiveness of the algorithm. The paper also discusses the limitations of other classification algorithms, such as C4.5, which is a theoretical analysis of their performance.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses strategies for handling context-sensitive features in supervised machine learning, which often involves probabilistic models and techniques such as Bayesian networks and Markov models.  Rule Learning: The paper discusses heuristic strategies for handling context-sensitive features, which often involve the use of rules or decision trees to capture contextual information. The paper also mentions the work of machine learning researchers who have developed rule-based approaches to context-sensitive learning.
This paper belongs to the sub-category of AI called Neural Networks.   Explanation: The paper proposes a new type of neural network called Case Retrieval Nets (CRNs) and discusses their foundations, properties, and implementation. The paper describes how CRNs are trained using backpropagation and how they can be used for case-based reasoning. The paper also presents experimental results demonstrating the effectiveness of CRNs in solving various problems. Therefore, the paper is primarily focused on the use of neural networks for case-based reasoning.
Probabilistic Methods, Rule Learning  Probabilistic Methods: The paper uses Bayesian inference to estimate the parameters of the linear feedback models. The authors also mention the use of Markov Chain Monte Carlo (MCMC) methods for sampling from the posterior distribution.  Rule Learning: The paper describes a method for automatically discovering linear feedback models from data using a set of rules that encode the structure of the models. The rules are based on the concept of "causal influence diagrams" and are used to guide the search for the best model. The authors also mention the use of decision trees to represent the rules.
Genetic Algorithms.   Explanation: The paper specifically focuses on a new approach for handling constraints in genetic algorithm optimization problems. The term "genetic algorithm" is mentioned multiple times throughout the text, while there is no mention of any of the other sub-categories of AI listed. Therefore, this paper belongs to the Genetic Algorithms sub-category.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses the use of compact, structured representations of MDPs, such as Bayesian networks, to develop algorithms for reachability analysis. The methods produce structured descriptions of reachable states that can be used to reduce the size of the MDP and make it easier to solve.  Reinforcement Learning: The paper is concerned with the solution of Markov decision processes (MDPs), which are a common framework for modeling decision-making problems in reinforcement learning. The algorithms developed in the paper are designed to make the solution of MDPs more feasible by reducing their size and complexity.
Rule Learning.   Explanation: The paper discusses the use of ILP systems, such as GOLEM, FOIL, and MIS, which are all examples of rule learning systems. The algorithms presented in the paper are also focused on learning meta-knowledge to restrict the hypothesis space in rule learning systems. Therefore, this paper belongs to the sub-category of AI known as Rule Learning.
Probabilistic Methods, Rule Learning, Theory.   Probabilistic Methods: The paper presents new results within a Bayesian framework for learning logic programs from positive examples only. The authors show that the upper bound for expected error of a learner which maximises the Bayes' posterior probability is within a small additive term of one which does the same from a mixture of positive and negative examples.  Rule Learning: The paper discusses the learnability of logic programs from positive examples only, which falls under the category of rule learning.  Theory: The paper presents theoretical results related to the learnability of grammars and logic programs from positive examples only, and how this relates to Chomsky's theory of innate human linguistic abilities. The authors also describe an implementation of their approach and report results of testing it on artificially-generated data-sets.
Theory.   Explanation: The paper discusses theoretical results about an estimator in statistical inference and its connection with wavelet theory. There is no mention or application of any specific sub-category of AI such as case-based reasoning, neural networks, etc.
Probabilistic Methods, Rule Learning  Explanation:   The paper describes the Adaptive Simulated Annealing (ASA) algorithm, which is a probabilistic optimization method. The algorithm uses a set of rules to adapt the annealing schedule based on the performance of the current solution. This can be seen as a form of rule learning, where the algorithm learns from its own experience to improve its performance. Therefore, the paper belongs to the sub-categories of Probabilistic Methods and Rule Learning.
Theory.   Explanation: This paper presents an algorithm for determining the largest possible number of leaves in an agreement subtree of two evolutionary trees. The paper does not involve any application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning. Instead, it focuses on theoretical analysis and algorithm design. Therefore, the paper belongs to the sub-category of AI theory.
Neural Networks, Rule Learning.   Neural Networks: The paper discusses the synthesis, optimization, and analysis of a neural network for an ECG patient monitoring task. The network was optimized over a set of normal and abnormal heartbeats, and the classification error rate was reduced by a factor of 2. The weights and unit activations of the optimized network were analyzed to reduce the size of the network without loss of accuracy.  Rule Learning: The neural network was synthesized from a rule-based classifier, indicating the use of rule learning techniques in the development of the network.
Neural Networks, Theory.   Neural Networks: The paper discusses the use of the "EXIN" (afferent excitatory and lateral inhibitory) learning rules to model RF changes during ICMS. This is a type of neural network model.  Theory: The paper presents a theoretical model (the EXIN model) to explain the observed effects of ICMS on RF topography. It also discusses the possible role of inhibitory learning in producing these effects and compares ICMS to other forms of conditioning and lesioning.
This paper belongs to the sub-category of AI called Neural Networks. The paper discusses the use of machine learning, a subset of AI, and specifically the acceleration of the machine learning process through additional techniques; neural networks are the type of machine learning algorithm whose training is being accelerated.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper presents a detailed analysis of the evolution of GP populations using the MAX problem, which is a classic problem in genetic programming. The paper discusses the use of crossover and program size restrictions in GP, which are key components of genetic algorithms.  Theory: The paper presents theoretical models and experimental evidence related to the behavior of GP populations. The paper discusses Price's Covariance and Selection Theorem, which is a theoretical result in evolutionary biology that has been applied to genetic algorithms. The paper also discusses the covariance between gene frequency and fitness in the first few generations of GP runs, which is a theoretical concept in evolutionary biology.
Probabilistic Methods.   Explanation: The paper presents a probabilistic calculus that combines both probabilistic and causal information to produce probabilistic statements about the effect of actions and the impact of observations. The calculus includes conditioning operators that allow for both ordinary Bayes conditioning and causal conditioning. The paper does not discuss case-based reasoning, genetic algorithms, neural networks, reinforcement learning, rule learning, or theory.
Genetic Algorithms, Neural Networks, Theory.   Genetic Algorithms: The paper presents a comparison between a traditional GA-based function optimizer and the proposed cooperative coevolutionary approach. It also suggests ways in which the performance of GA and other EA-based optimizers can be improved.  Neural Networks: The paper suggests a new approach to evolving complex structures such as neural networks and rule sets.  Theory: The paper presents a general model for the coevolution of cooperating species, which is instantiated and tested in the domain of function optimization. The results are encouraging in two respects, suggesting ways to improve the performance of GA and other EA-based optimizers, and proposing a new approach to evolving complex structures.
Probabilistic Methods, Reinforcement Learning, Case Based.   Probabilistic Methods: The paper discusses the use of probabilistic models for lifelong learning, such as Bayesian methods for updating prior knowledge based on new data.  Reinforcement Learning: The paper mentions the use of reinforcement learning for lifelong learning, specifically in the context of learning to recognize objects.  Case Based: The paper describes the use of case-based reasoning for lifelong learning, where knowledge from previous learning tasks is used to inform future tasks.
Rule Learning, Theory.   Rule Learning is present in the text as the paper discusses the use of rules and knowledge in inductive learning. The authors argue that incorporating prior knowledge and rules can improve the accuracy and efficiency of inductive learning algorithms.   Theory is also present in the text as the paper discusses the theoretical underpinnings of inductive learning and the role of knowledge in this process. The authors draw on existing theories of learning and cognition to support their arguments about the utility of knowledge in inductive learning.
Theory.   Explanation: The paper presents a theoretical analysis of the universal algorithm of Cover for the constant rebalanced portfolio problem, and extends it to the case of fixed percentage transaction costs. The paper also presents a randomized implementation that is faster in practice. The paper then explains how these algorithms can be applied to other problems, such as combining the predictions of statistical language models, where the resulting guarantees are more striking. There is no mention or application of Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning in the text.
Neural Networks.   Explanation: The paper discusses the design and performance comparison of two types of recurrent neural networks - fully connected recurrent network (FRN) and ring-structure recurrent network (RRN). The paper also mentions the use of Real Time Recurrent Learning (RTRL) method for on-line training of FRN. Therefore, the paper belongs to the sub-category of Neural Networks in AI.
Probabilistic Methods.   Explanation: The paper discusses the use of Bayesian networks to model object interactions and find the most probable explanation for a given scene. Bayesian networks are a probabilistic method commonly used in AI for reasoning under uncertainty. The paper does not mention any other sub-categories of AI.
Probabilistic Methods.   Explanation: The paper describes a practical implementation of Bayesian learning using Monte Carlo methods. Bayesian learning is a probabilistic approach to machine learning that involves updating prior beliefs based on new evidence. Monte Carlo methods are a class of probabilistic algorithms that use random sampling to approximate complex calculations. Therefore, this paper belongs to the sub-category of Probabilistic Methods in AI.
Theory  Explanation: The paper discusses scheduling algorithms for processors with lookahead, which falls under the category of theoretical computer science. The paper does not utilize any specific AI techniques such as neural networks or genetic algorithms, but rather focuses on the theoretical analysis of scheduling algorithms. Therefore, the paper belongs to the sub-category of Theory.
Theory. This paper belongs to the sub-category of AI known as Theory. The paper discusses different theories of rational inference and how they conflict with each other. It also adapts formal results from social choice theory to prove that every universal theory of default reasoning will violate at least one reasonable principle of rational reasoning. There is no mention of any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning.
Probabilistic Methods.   Explanation: The paper discusses the application of the exponential weights algorithm, which is a probabilistic method, to the problem of predicting a binary sequence. The authors also mention the Bayes algorithm with the Jeffreys prior, which is another probabilistic method. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
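As a rough illustration of the kind of algorithm discussed (a generic sketch, not the paper's own method), an exponential weights predictor keeps one weight per expert and shrinks each weight in proportion to that expert's log loss on every observed bit; the prediction is the weight-averaged expert forecast:

```python
import math

def exponential_weights(sequence, experts, eta=0.5):
    """Predict a binary sequence with a weighted mixture of experts.

    `experts` maps a history (list of past bits) to P(next bit = 1).
    Returns the total log loss of the mixture prediction.
    """
    weights = [1.0] * len(experts)
    total_loss = 0.0
    history = []
    for bit in sequence:
        preds = [e(history) for e in experts]
        wsum = sum(weights)
        # Mixture prediction: weighted average of expert forecasts.
        p = sum(w * q for w, q in zip(weights, preds)) / wsum
        total_loss += -math.log(p if bit == 1 else 1.0 - p)
        # Exponentially downweight each expert by its own log loss.
        for i, q in enumerate(preds):
            loss = -math.log(q if bit == 1 else 1.0 - q)
            weights[i] *= math.exp(-eta * loss)
        history.append(bit)
    return total_loss
```

On a run of ones with one optimistic and one pessimistic constant expert, the mixture's total loss stays below that of always predicting 1/2, because the weights concentrate on the better expert.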
Theory  Explanation: The paper primarily focuses on the theoretical connections between game theory, on-line prediction, and boosting. While it does mention algorithms and methods used in these areas, the main emphasis is on the underlying theory and how these concepts are related.
Theory.   Explanation: The paper presents a new program representation that combines different types of information to enable better integration of optimization phases. While the paper does not explicitly use any AI techniques, it contributes to the theoretical foundations of optimizing compilers, which places it in the Theory sub-category.
Genetic Algorithms, Rule Learning.   Genetic Algorithms are present in the paper as the authors use a genetic algorithm to evolve control structures. They state that "the genetic algorithm is used to evolve a population of programs that use automatically defined macros to create control structures."   Rule Learning is also present in the paper as the authors use a set of rules to define the behavior of the evolved control structures. They state that "the rules define the behavior of the control structures and are used to evaluate the fitness of the evolved programs."
Neural Networks, Genetic Algorithms.   Neural Networks: The paper models agents as connectionist networks and uses real-valued activations for communication.   Genetic Algorithms: The paper mentions the use of an evolutionary program, GNARL, for coevolving a communication scheme over continuous channels.
Genetic Algorithms.   Explanation: The paper discusses the implementation of parallel Genetic Programming (GP) on a SIMD system, and explores the challenges and solutions involved in parallel evaluation of different S-expressions. The paper also mentions the use of a specified set of functions as the "instruction set" for GP, which is a key characteristic of Genetic Algorithms.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper discusses the effects of group formation on evolutionary search, which is a key concept in genetic algorithms. The authors use a genetic algorithm to simulate the evolution of groups and analyze the results.  Reinforcement Learning: The paper also discusses the use of reinforcement learning to optimize the performance of the groups. The authors use a fitness function to evaluate the performance of the groups and use reinforcement learning to adjust the parameters of the algorithm to improve performance.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the neural pathways in the primate retina and how they process information. It describes the different types of cells and their connections, which form a complex network.   Probabilistic Methods: The paper uses statistical analysis to interpret the experimental results. It discusses the probability of certain events occurring and the significance of the findings. For example, it calculates the probability of a cone cell responding to a particular stimulus and compares it to the probability of a random response.
Theory. The paper proposes a theoretical model of superscalar processor performance that addresses the shortcomings of the current trace-driven simulation approach. The model views performance as an interaction of program parallelism and machine parallelism, which are decomposed into multiple component functions. The paper describes methods for measuring or computing these functions and combines them to provide an accurate estimate of performance. The paper does not mention any other sub-categories of AI such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the application of artificial neural networks to predict splice site locations in human pre-mRNA.   Probabilistic Methods: The paper uses a joint prediction scheme that regulates a cutoff level for splice site assignment based on the prediction of transition regions between introns and exons. The paper also examines the distribution of false splice sites and links it to a possible scenario for the splicing mechanism in vivo. The paper reports the percentage of false positive predictions and the average number of false donor and acceptor sites per true site. These are all examples of probabilistic methods.
Probabilistic Methods.   Explanation: The paper describes methods for converting plans represented in a procedural language to observation models represented as probabilistic belief networks. This involves using probabilistic methods to infer the likelihood of different plans based on uncertain and incomplete observations.
Theory. This paper presents a theoretical result about the inference of quartet splits in a binary tree. It does not involve any practical implementation or application of AI techniques such as neural networks, genetic algorithms, or reinforcement learning.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper presents a new algorithm, the Short Quartets Method, which is based on probabilistic methods for reconstructing evolutionary trees. The authors compare the statistical power of their method with other polynomial time methods, such as Neighbor-Joining and the 3-approximation algorithm by Agarwala et al.  Theory: The paper addresses a fundamental problem in biology, the construction of evolutionary trees, and presents a new algorithm that is consistent and has greater statistical power than other polynomial time methods. The authors also discuss the limitations of current methods and the potential of their approach to produce the correct topology from shorter sequences.
Theory  Explanation: The paper discusses the methodology and theoretical implications of Artificial Life research, rather than focusing on a specific sub-category of AI such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning.
Theory.   Explanation: The paper provides a complete characterization of closed shift-invariant subspaces of L²(ℝᵈ) in terms of their approximation order, without using any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning. Therefore, the paper belongs to the sub-category of AI theory.
Theory.   Explanation: The paper focuses on theoretical results and provides a theorem and corollary related to learning domain-specific bias. While the paper discusses learning tasks and generalization, it does not explicitly use any of the other sub-categories of AI listed.
Probabilistic Methods.   Explanation: The paper discusses the use of a latent variable model closely related to factor analysis for determining the principal axes of a set of observed data vectors through maximum-likelihood estimation of parameters. The associated likelihood function is also discussed, and an EM algorithm is presented for estimating the principal subspace iteratively. The paper emphasizes the advantages of defining a probability density function for PCA. All of these aspects are related to probabilistic methods in AI.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper proposes a framework for modeling belief change based on the concepts of knowledge and plausibility, which are defined in probabilistic terms. The notion of prior plausibilities and conditioning are also discussed, which are key concepts in Bayesian probability theory.  Theory: The paper presents a theoretical framework for modeling belief change, discussing the properties of belief and their interaction with knowledge and plausibility. The authors also mention axiomatic characterizations of belief change, indicating a theoretical approach to the problem.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms: The paper presents a new heuristic approach called Differential Evolution for minimizing continuous space functions. This approach is based on the principles of genetic algorithms, where a population of candidate solutions is evolved over generations through selection, crossover, and mutation. The paper also mentions that the new method is inspired by the success of genetic algorithms in solving optimization problems.  Probabilistic Methods: The paper mentions that the new method is a simple and efficient adaptive scheme for global optimization over continuous spaces. The approach is based on the principles of stochastic optimization, where the candidate solutions are perturbed randomly to explore the search space. The paper also mentions that the new method is robust and easy to use, which are desirable properties of probabilistic methods.
Probabilistic Methods.   The paper discusses the concept of stimulus specificity in perceptual learning, which involves the probabilistic nature of how learning occurs for specific stimuli. The authors analyze previous experiments that have been conducted on perceptual learning and suggest that the results may be influenced by the specific stimuli used in those experiments. They propose a new experimental design that takes into account the probabilistic nature of learning and stimulus specificity. Therefore, the paper is most related to probabilistic methods in AI.
Reinforcement Learning, Probabilistic Methods.   Reinforcement Learning is present in the text as the paper discusses temporal-difference learning methods such as TD(λ) for learning to predict the outcome of an unknown Markov chain based on repeated observations of its state trajectories.   Probabilistic Methods are also present in the text as the new algorithms presented in the paper use estimated transition probabilities to set the step-size parameters online in such a way as to eliminate the bias normally inherent in temporal-difference methods.
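For context, a minimal tabular sketch of temporal-difference prediction (the TD(0) special case, with a fixed step size rather than the paper's online step-size rule; the episode format here is illustrative):

```python
def td0_predict(episodes, n_states, alpha=0.1):
    """Tabular TD(0): learn V(s), the expected terminal reward from s.

    Each episode is a pair (states, reward): the visited state
    trajectory and the reward received on termination.
    """
    V = [0.0] * n_states
    for states, reward in episodes:
        for t in range(len(states)):
            s = states[t]
            # TD target: bootstrap from the next state's value,
            # or use the terminal reward on the last step.
            target = reward if t == len(states) - 1 else V[states[t + 1]]
            V[s] += alpha * (target - V[s])
    return V
```

Repeated observations of the same trajectory drive the value estimates toward the true expected outcome; the bias the paper addresses comes from bootstrapping off imperfect intermediate estimates.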
This paper belongs to the sub-category of Neural Networks.   Explanation: The paper proposes a gas identification system that uses a graded temperature sensor and neural net interpretation. The neural net is used to interpret the sensor data and identify the type of gas present. The authors also discuss the training of the neural net using backpropagation and the use of a hidden layer to improve the accuracy of the system. Therefore, the paper heavily relies on the use of neural networks for gas identification.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the behavior of X and Y retinal ganglion cells using a push-pull shunting model, which is a type of neural network. The model is based on the interactions between excitatory and inhibitory inputs to the ganglion cells.  Probabilistic Methods: The paper uses probabilistic methods to simulate the behavior of the ganglion cells. The authors use a stochastic model to simulate the firing of the cells, which takes into account the variability in the inputs to the cells and the intrinsic noise in the cells themselves. The authors also use statistical methods to analyze the results of their simulations.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the use of higher order statistical properties, such as skewness and kurtosis, to characterize non-Gaussian and nonlinear properties of musical signals. These statistical properties are used as features for classification, which is a common application of probabilistic methods.  Theory: The paper presents theoretical concepts related to higher order spectra, non-Gaussian and nonlinear properties of signals, and statistical distance measures. It also discusses the relationship between skewness and the bicoherence function, which is a theoretical concept in signal processing.
Case Based.   Explanation: The paper discusses the application of the memory model of Case Retrieval Nets to distributed processing of information, specifically in the context of case-based reasoning. The focus is on how to extend the model to handle distributed cases, which is a key aspect of case-based reasoning. The other sub-categories of AI (Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, Rule Learning, Theory) are not directly relevant to the content of the paper.
Case Based, Rule Learning  Explanation:   - Case Based: The paper proposes a framework for document reuse, which involves the adaptation of previous documents for reuse in new cases. This is a typical problem-solving task that falls under the category of case-based reasoning. - Rule Learning: The framework proposed in the paper is based on an explicit representation of the illocutionary and rhetorical structure underlying documents. This representation enables the construction of documents by issuing goal-based specifications and rapidly retrieving documents with similar intentional structure, which involves the use of rules for document drafting.
Probabilistic Methods.   Explanation: The paper introduces a hierarchical mixture of latent variable models, which is a probabilistic method for data visualization. The parameters of the model are estimated using the expectation-maximization algorithm, which is a common probabilistic method for parameter estimation. The paper also discusses the use of probability distributions to model the data and the clusters/sub-clusters in the visualization.
Genetic Algorithms.   Explanation: The paper discusses the usage of Differential Evolution (DE), which is a type of evolutionary algorithm and falls under the category of Genetic Algorithms. The paper describes how DE generates new parameter vectors by adding weighted differences between population vectors, and how it selects the best vector based on its objective function value. The paper also discusses various practical variants of DE that have proven to be useful. Overall, the paper focuses on the application of a genetic algorithm for function optimization.
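The mutate/crossover/select cycle described above can be sketched as the classic DE/rand/1/bin scheme (a generic illustration with conventional parameter names F and CR, not necessarily the paper's exact variant):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=100, seed=0):
    """Minimize f over a box via DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # Mutation: add the weighted difference of two population
            # vectors to a third.
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                      for d in range(dim)]
            # Binomial crossover with the target vector (jrand forces
            # at least one mutant component into the trial).
            jrand = rng.randrange(dim)
            trial = [mutant[d] if (rng.random() < CR or d == jrand)
                     else pop[i][d] for d in range(dim)]
            ft = f(trial)
            # Greedy selection on the objective function value.
            if ft <= fitness[i]:
                pop[i], fitness[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fitness[i])
    return pop[best], fitness[best]
```

On a smooth unimodal function such as the 2-D sphere, this converges close to the optimum within a few thousand evaluations.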
Rule Learning, Theory.   Explanation: The paper presents a novel application of ILP (Inductive Logic Programming) to the problem of diterpene structure elucidation from ¹³C NMR spectra. ILP is a subfield of Rule Learning, which involves learning rules from examples. The paper also discusses the problem of learning classification rules from a database of peak patterns for diterpenes with known structure, which is a theoretical problem in the field of AI.
Probabilistic Methods.   Explanation: The paper discusses Naive Bayesian Learning, which is a probabilistic method used in machine learning. The paper explains how Naive Bayesian Learning works and how it can be applied to various problems. The paper also discusses the advantages and limitations of Naive Bayesian Learning. Therefore, this paper belongs to the sub-category of AI called Probabilistic Methods.
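As a concrete illustration of how Naive Bayesian learning works (a generic sketch with hypothetical data, not taken from the paper), each class is scored by a class prior times per-feature conditional probabilities, here with Laplace smoothing:

```python
from collections import Counter, defaultdict
import math

def train_naive_bayes(examples):
    """examples: list of (feature_tuple, label). Returns a classifier."""
    class_counts = Counter(label for _, label in examples)
    feat_counts = defaultdict(Counter)  # (class, position) -> value counts
    values = defaultdict(set)           # position -> observed values
    for feats, label in examples:
        for i, v in enumerate(feats):
            feat_counts[(label, i)][v] += 1
            values[i].add(v)
    n = len(examples)

    def classify(feats):
        best, best_lp = None, -math.inf
        for c, cc in class_counts.items():
            lp = math.log(cc / n)  # log prior
            for i, v in enumerate(feats):
                # Laplace smoothing: add 1 to every count so unseen
                # feature values never zero out a class.
                num = feat_counts[(c, i)][v] + 1
                den = cc + len(values[i])
                lp += math.log(num / den)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

    return classify
```

The "naive" assumption is that features are conditionally independent given the class, which keeps the model cheap to fit yet often surprisingly accurate.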
Probabilistic Methods, Rule Learning  Explanation:   Probabilistic Methods: The paper discusses the selection of models based on their computational cost, accuracy, and precision, among other things. These factors are probabilistic in nature, and the selection process involves choosing the simplest model that meets the needs of the hillclimbing algorithm. This is a probabilistic approach to model selection.  Rule Learning: The paper describes a technique called "Gradient Magnitude Model Selection" (GMMS), which selects the simplest model that meets the needs of the hillclimbing algorithm. This is a rule-based approach to model selection, where the rules are based on the needs of the hillclimbing algorithm.
Neural Networks.   Explanation: The paper presents a new algorithm for improving neural network generalization after supervised training. The method is based on principal component analysis of the node activations of successive layers of the network. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper presents a rule-based technique for computing gradients in the presence of pathologies in the simulators.   Probabilistic Methods are also present in the text as the paper discusses the assumptions made by gradient-based numerical optimization methods and how realistic simulators tend to violate these assumptions. This implies that there is uncertainty in the simulator and the optimization process, which is a characteristic of probabilistic methods.
Case Based, Rule Learning.   The paper describes a case-based design system and applies inductive learning to form rules for selecting appropriate prototype designs.
Rule Learning, Data Mining, Machine Learning.   Rule Learning is the most related sub-category, as a rule-learning program is explicitly mentioned in the text as the tool used to uncover indicators of fraudulent behavior. Data Mining is also highly related, as it is combined with constructive induction and machine learning techniques to design methods for detecting fraudulent usage of cellular telephones based on profiling customer behavior. Machine Learning is also present throughout the paper, as it is used to create profilers and to combine evidence from multiple profilers to generate high-confidence alarms.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper discusses the use of genetic programming, which is a type of genetic algorithm, to evolve artificial ant control programs.   Reinforcement Learning: The paper discusses the use of penalties and rewards in the fitness function, which is a key aspect of reinforcement learning. The authors note that in nature, there may be a penalty for doing the same thing as one's parents, and they experiment with adding such a penalty to the fitness function. They also replace the static fitness function with randomly generated dynamic test cases, which is another aspect of reinforcement learning.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses supervised Bayesian learning, which is a probabilistic method for classification. The authors also mention the use of probability distributions to model sensor data.  Theory: The paper discusses the concept of inductive bias, which is a theoretical concept in machine learning that refers to the assumptions made by a learning algorithm about the underlying distribution of the data. The authors also mention the use of minimum description length as a theoretical framework for model selection.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The method presented in the paper involves using linear regression to learn a weight vector for a linear function over a feature set. This involves probabilistic modeling of the relationship between the input features and the output function. Additionally, the algorithm involves constructing new features based on the joint ability of existing features to predict the error of the current hypothesis, which can be seen as a probabilistic approach to feature selection.  Rule Learning: The algorithm presented in the paper involves constructing new features by forming the product of the two features that most effectively predict the squared error of the current hypothesis. This can be seen as a rule-based approach to feature construction, where the rule is to combine the two most effective features. Additionally, the extension to the method involves selecting the specific pair of features to combine based on their joint ability to predict the hypothesis' error, which can be seen as a rule-based approach to feature selection.
Probabilistic Methods.   Explanation: The paper discusses Bayesian experimental design, which is a probabilistic method that involves using prior knowledge and probability distributions to make decisions about the design of experiments. The paper also mentions non-Bayesian design in passing, but the focus is on Bayesian methods.
Probabilistic Methods, Rule Learning  Explanation:   Probabilistic Methods: The paper discusses the use of classification algorithms on regression tasks, which involves probabilistic techniques such as mapping the continuous target variable into an ordinal one by grouping its values into intervals and using misclassification costs to reflect the implicit ordering among the resulting classes.   Rule Learning: The paper's methodology transforms a regression problem into a classification one, which involves creating rules for grouping values into intervals and selecting the best discretization method through a search-based approach.
Probabilistic Methods.   Explanation: The paper discusses the Bayesian multivariate adaptive regression spline (BMARS) methodology, which is a probabilistic method for modelling nonlinear time series and financial datasets. The paper also mentions Bayesian versions of autoregressive conditional heteroscedasticity (ARCH) and generalized ARCH (GARCH) models, which are also probabilistic methods commonly used in finance.
Genetic Algorithms. This paper belongs to the sub-category of Genetic Algorithms in AI. The paper discusses the use of genetic programming to evolve teams of agents in a predator/prey environment. It explores different breeding strategies and coordination mechanisms to optimize the performance of the teams. The paper focuses on the use of genetic algorithms to evolve the teams and improve their coordination and teamwork.
The paper belongs to the sub-category of AI called "Knowledge-Based Systems". This is evident from the title of the paper, which includes the phrase "Wissensbasierte Systeme" (German for "knowledge-based systems"). This sub-category involves the use of knowledge representation and reasoning techniques to build intelligent systems that can reason about complex problems.
Theory.   Explanation: The paper presents algorithms for solving two theoretical problems in computer science: determining whether a set of species has a perfect phylogeny and triangulating a colored graph. The paper does not involve any application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Theory  Explanation: The paper presents a detailed comparative study of the performance advantages of different instruction scheduling approaches in superscalar (RISC) processors. While the study involves simulations and experiments, it does not involve the use of any specific AI techniques such as neural networks, genetic algorithms, or reinforcement learning. The paper is primarily focused on the theoretical analysis of the performance tradeoffs between different instruction scheduling approaches. Therefore, the paper belongs to the sub-category of AI called Theory.
Theory. The paper discusses interdisciplinary research in cognitive and imaging science and proposes cognitive mechanisms that deserve further study with imaging tools, yet to be developed, that can yield better spatio-temporal resolution. The paper does not discuss any specific AI sub-category such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning.
Probabilistic Methods.   Explanation: The paper discusses the use of formulas to bound the actual treatment effect in experimental studies with random treatment assignment and imperfect subject compliance. These formulas are based on probabilistic methods and provide the tightest bounds on the average treatment effect that can be inferred given the distribution of assignments, treatments, and responses. Therefore, this paper belongs to the sub-category of AI known as Probabilistic Methods.
Reinforcement Learning, Probabilistic Methods.   Reinforcement learning is present in the text as the paper discusses systems that actively choose situations from which they will learn, which is a key aspect of reinforcement learning.   Probabilistic methods are also present in the text as the paper discusses learning systems that make decisions based on uncertain or incomplete information, which is a key aspect of probabilistic methods.
Theory.   Explanation: The paper focuses on the theoretical study of the learnability of a specific class of boolean formulas, without any practical implementation or application of AI techniques such as neural networks, genetic algorithms, or reinforcement learning. The paper presents an algorithm that uses equivalence and membership queries, which are standard techniques in the theoretical study of learning algorithms. Therefore, the paper belongs to the sub-category of AI theory.
This paper belongs to the sub-category of AI called Neural Networks.   Explanation:  The paper focuses on the use of neural networks for facial recognition. The authors describe the process of extracting facial features using a neural network, and then using those features for recognition. They also discuss the use of different types of neural networks, such as feedforward and convolutional neural networks, for this task. Overall, the paper is primarily focused on the use of neural networks for facial recognition, making it most closely related to the Neural Networks sub-category of AI.
Probabilistic Methods.   Explanation: The paper discusses Bayesian theory, which is a probabilistic method that uses prior information to update beliefs about a parameter of interest. The paper specifically focuses on Bayesian optimal experimental design for the normal linear model with unknown variance.
Theory  Explanation: The paper presents a theoretical framework for software pipelining in the presence of structural hazards, using an ILP formulation. There is no mention of any specific AI techniques such as neural networks, genetic algorithms, or reinforcement learning.
Reinforcement Learning, Theory.  Reinforcement learning is the primary sub-category of AI that this paper belongs to. The paper discusses the problem of integrating planning with real-time learning and decision-making, which is a key problem in reinforcement learning. The paper also proposes a solution to this problem based on the mathematical framework of Markov decision processes and reinforcement learning.  Theory is another sub-category of AI that this paper belongs to. The paper presents a theoretical framework for multi-time models and establishes their suitability for planning and learning by virtue of their relationship to the Bellman equations. The paper also summarizes prior work on temporally abstract models and extends it from the prediction setting to include actions, control, and planning.
Case Based, Theory.   Explanation: Case-based reasoning is the main focus of the paper, as it proposes a way to evaluate the generalization capabilities of a case-based reasoning system. The paper also discusses the limitations of the maxim "similar problems have similar solutions" as a generalization strategy, which can be seen as a theoretical aspect of AI.
Genetic Algorithms, Neural Networks.   Genetic algorithms are mentioned in the abstract as one of the approaches used in robotics programming. The paper proposes a method to combine hand programming and genetic algorithms, which is used to solve a complex problem with significantly fewer evaluations.   Neural networks are also mentioned in the abstract as an approach used in robotics programming. While the paper does not explicitly discuss neural networks, it proposes a method that can be used in conjunction with neural networks to improve their performance.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the Hastings and Metropolis algorithms, which are probabilistic methods used for sampling from a prescribed distribution.   Theory: The paper applies recent results in Markov chain theory to analyze the convergence rates of the algorithms. It also evaluates computable bounds on the rates of convergence.
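The Metropolis–Hastings construction mentioned in the entry above can be illustrated with a minimal sketch. This is an illustrative toy, not the algorithms the paper analyses: the standard-normal target, the Gaussian random-walk proposal, and all tuning parameters here are assumptions.

```python
import random, math

def metropolis_hastings(log_target, proposal_step, x0, n_samples, burn_in=500):
    """Random-walk Metropolis sampler for a 1-D target density."""
    x, samples = x0, []
    for i in range(n_samples + burn_in):
        cand = x + random.gauss(0.0, proposal_step)   # symmetric proposal
        # Accept with probability min(1, target(cand) / target(x)).
        if math.log(random.random()) < log_target(cand) - log_target(x):
            x = cand
        if i >= burn_in:
            samples.append(x)
    return samples

# Example: sample from a standard normal (log density up to a constant).
random.seed(0)
draws = metropolis_hastings(lambda z: -0.5 * z * z, 1.0, 0.0, 20000)
mean = sum(draws) / len(draws)
```

Because the proposal is symmetric, the Hastings correction ratio cancels and only the target ratio appears in the acceptance test; the convergence-rate bounds the paper evaluates concern how fast chains like this one approach the prescribed distribution.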
Neural Networks.   Explanation: The paper discusses the use of neural networks in conjunction with metal oxide semiconductor gas sensors for olfaction. The authors describe how the neural network is trained to recognize specific odors based on the sensor data, and how it can be used for odor classification and identification. While other AI sub-categories may also be relevant to this topic, such as probabilistic methods or rule learning, the focus of the paper is on the use of neural networks.
Neural Networks, Probabilistic Methods, Theory.   Neural Networks: The paper discusses the link between cognitive psychology and artificial intelligence, which includes the study of neural networks and their role in cognitive computation.   Probabilistic Methods: The paper mentions the use of probabilistic models in cognitive computation, specifically in the context of Bayesian inference.   Theory: The paper discusses cognitive computation as a discipline that links together neurobiology, cognitive psychology, and artificial intelligence, which can be seen as a theoretical framework for understanding the relationship between these fields.
Theory.   Explanation: The paper is focused on providing theoretical bounds on the minimax regret in a game of assigning probabilities to future data based on past observations. It does not involve the implementation or application of any specific AI technique such as neural networks, reinforcement learning, or probabilistic methods.
Theory  Explanation: This paper presents a formal framework for constructing similarity metrics, which is a theoretical approach to measuring similarity. The paper does not discuss any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Rule Learning.   Explanation: The paper is concerned with the problem of inducing recursive Horn clauses from small sets of training examples, which is a task in rule learning. The method presented, iterative bootstrap induction, is a rule learning technique that generates simple clauses as properties of the required definition and uses them to induce the required recursive definitions. The experiments conducted in the paper also evaluate the effectiveness of the rule learning approach.
Theory.   Explanation: The paper discusses a method for curve estimation based on statistical and information-based complexity theory. It does not involve any of the other sub-categories of AI listed.
Probabilistic Methods.   Explanation: The paper discusses the use of network-based inference techniques for causal analysis in clinical experimentation, which involves partially specified networks and probabilistic inference. The paper also mentions the use of prior and posterior distributions, which are common in probabilistic methods.
Probabilistic Methods.   Explanation: The paper discusses the assumptions underlying statistical estimation and the causal assumptions that underlie structural equation models (SEM). These assumptions are related to probabilistic methods, which are used to model uncertainty and probability distributions in AI. The paper also mentions recent advances in graphical methods, which are a type of probabilistic modeling technique used in AI.
Neural Networks.   Explanation: The paper focuses on the application of Incremental Class Learning (ICL) to Handwritten Digit Recognition using a spatio-temporal representation of patterns. The approach involves freezing crucial nodes (features) in the hidden layers of a neural network after learning a category, and then using these frozen features in subsequent learning to recognize other categories. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Probabilistic Methods.   The paper discusses the use of speculation in path-oriented scheduling methods, which involves predicting important execution paths based on execution profiling or frequency estimation. The proposed method, speculative hedge, aims to minimize the penalty suffered by other paths when instructions are speculated along a path, by controlling over-speculation and eliminating unnecessary speculation that delays any path's exit. This approach involves probabilistic reasoning and decision-making based on the likelihood of different paths being taken at run time.
Probabilistic Methods.   Explanation: The paper discusses various exact algorithms for performing probabilistic inference in Bayesian belief networks, and proposes a new approach to the problem from a combinatorial optimization perspective. The focus is on finding an optimal factoring given a set of probability distributions, which is a key element of efficient probabilistic inference. While the paper does not explicitly mention other sub-categories of AI, it is clear that the main focus is on probabilistic methods for reasoning under uncertainty.
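The factoring idea — choosing an order in which to sum variables out of a product of distributions — can be sketched on a toy chain A → B → C. The numbers and the chain itself are hypothetical; this is not the paper's optimization procedure, only an illustration of why factoring matters.

```python
def marginal_c(p_a, p_b_given_a, p_c_given_b):
    """Compute P(C) in the chain A -> B -> C by eliminating A, then B.

    Factoring the sum as sum_b P(c|b) * (sum_a P(b|a) P(a)) touches far
    fewer terms than enumerating the full joint over (A, B, C).
    """
    # Eliminate A: intermediate message m(b) = sum_a P(a) P(b|a).
    m_b = {b: sum(p_a[a] * p_b_given_a[a][b] for a in p_a) for b in (0, 1)}
    # Eliminate B: P(c) = sum_b m(b) P(c|b).
    return {c: sum(m_b[b] * p_c_given_b[b][c] for b in (0, 1)) for c in (0, 1)}

p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}
p_c = marginal_c(p_a, p_b_given_a, p_c_given_b)   # P(C=0) = 0.7
```

In larger networks different elimination orders produce intermediate factors of very different sizes, which is exactly the combinatorial-optimization view of the problem described above.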
Genetic Algorithms, Neural Networks, Reinforcement Learning.   Genetic Algorithms: The paper describes using a form of genetic algorithm to evolve connection weights in the neural networks.   Neural Networks: The paper is primarily focused on simulations of neural networks that generate their own teaching input.   Reinforcement Learning: The paper discusses the evolved capacity of the networks to learn to behave efficiently in an environment, which is a key aspect of reinforcement learning.
Probabilistic Methods.   Explanation: The paper presents a formalism that uses probabilistic causal networks to evaluate counterfactual queries. The approach is based on probabilistic reasoning and deals with uncertainties inherent in the world. The paper does not mention any other sub-categories of AI.
Probabilistic Methods.   Explanation: The paper presents a method for evaluating counterfactual queries in the context of structural models, which are a type of probabilistic model commonly used in econometrics and social sciences. The method involves computing probabilities of events under different hypothetical scenarios, which is a key characteristic of probabilistic methods. The other sub-categories of AI listed (Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, Theory) are not directly relevant to the content of the paper.
Probabilistic Methods, Rule Learning  Probabilistic Methods: The paper discusses the use of probabilistic models to detect malicious membership queries and exceptions. It mentions the use of Bayesian networks and Markov models to identify patterns of behavior that are indicative of malicious activity.  Rule Learning: The paper also discusses the use of rule-based systems to detect malicious activity. It mentions the use of decision trees and association rule mining to identify patterns of behavior that are indicative of malicious activity. The paper also discusses the use of expert systems to detect and respond to malicious activity.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses theory refinement in the context of Bayesian statistics, which is a probabilistic method of belief revision. The algorithms presented in the paper are also for refinement of Bayesian networks, which is a probabilistic graphical model.  Theory: The paper is primarily focused on the problem of theory refinement, which is a task of updating a domain theory in the light of new cases. The paper discusses how this task can be done automatically or with expert assistance, and presents algorithms for refinement of Bayesian networks to illustrate the concepts. The paper also discusses the reduction of the problem to an incremental learning task, which is a theoretical approach to solving the problem.
This paper belongs to the sub-category of AI called Neural Networks. Neural networks are mentioned in the abstract as the type of artificial life simulations used to study the emergence of generalist and specialist behavior in populations of organisms. The paper discusses how the behavior and energy extracting ability of organisms can co-evolve and be co-adapted in these simulations.
Reinforcement Learning, Probabilistic Methods  The paper belongs to the sub-category of Reinforcement Learning as it discusses enhancing model-based learning for its application in robot navigation. Reinforcement learning is a type of machine learning that involves an agent learning to make decisions in an environment by receiving feedback in the form of rewards or punishments. The paper also belongs to the sub-category of Probabilistic Methods as it discusses the use of probabilistic models in robot navigation. Probabilistic methods involve using probability theory to model uncertainty in data and make predictions. The paper proposes a probabilistic model for robot navigation that takes into account uncertainty in the environment and the robot's sensors.
Theory.   Explanation: The paper is focused on the problem of theory patching, which is a type of theory revision. The authors consider both propositional and first-order domain theories and analyze the tractability of the problem based on the stability of the information contained in the theory. The paper does not involve any of the other sub-categories of AI listed in the question.
Genetic Algorithms.   Explanation: The paper explicitly mentions the use of genetic algorithms in designing control circuits for autonomous agents. The paper also discusses how the genetic algorithm balances evolutionary design and human expertise to best design these agents. While other sub-categories of AI may be involved in the implementation of the agents, the focus of the paper is on the use of genetic algorithms in the design process.
Reinforcement Learning.   Explanation: The paper discusses the development of autonomous agents using a three-stage incremental approach, with a focus on reinforcement programs (RPs) and the trainer as a particular kind of RP. The experiments conducted involve providing guidance to an autonomous robot using reinforcement learning techniques. While other sub-categories of AI may also be relevant to the development of autonomous agents, the focus of this paper is on reinforcement learning.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses the Schema Theorem for Genetic Programming, which is a concept derived from Genetic Algorithms. The paper also explores the idea of building blocks in GP, which is a concept commonly associated with Genetic Algorithms.  Theory: The paper presents a theoretical analysis of the Building Block Hypothesis for Genetic Programming and discusses its limitations and shortcomings. The paper also formulates a Schema Theorem for GP, which is a theoretical framework for understanding the behavior of GP.
Neural Networks, Theory.   Neural Networks: The paper discusses interference in neural networks and how to make them less susceptible to interference. It also analyzes sigmoidal, multi-layer perceptron (MLP) networks that employ the back-propagation learning algorithm.   Theory: The paper develops a theoretical framework consisting of measures of interference and network localization, which incorporate not only the network weights and architecture but also the learning algorithm. It also addresses a familiar misconception about single-hidden-layer sigmoidal networks.
Reinforcement Learning, Theory.   Reinforcement learning is present in the paper as the IPD/CR is an extension of the Iterated Prisoner's Dilemma with evolution, which is a classic example of a reinforcement learning problem. The players learn from their past experiences and adjust their strategies accordingly.   Theory is also present in the paper as it examines the social network methods used to identify population behaviors found within the complex adaptive system of IPD/CR. The paper analyzes the social networks of interesting populations and their evolution, which is a theoretical approach to understanding the behavior of the system.
Probabilistic Methods, Reinforcement Learning, Theory.   Probabilistic Methods: The paper uses path integrals of multivariate conditional probabilities to define Lagrangians, which are then fit using maximum likelihood estimation.   Reinforcement Learning: The paper uses Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians and to tune trading rules.   Theory: The paper presents a paradigm of statistical mechanics of financial markets (SMFM) using nonlinear nonequilibrium algorithms and derives canonical momenta to use as technical indicators in trading rules. The paper also discusses the implications of the SMFM model on market efficiency.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as the author discusses the scaling issues faced by RL researchers and proposes a solution using abstract environment models.   Theory is also relevant as the paper discusses the abstract framework afforded by the connection to dynamic programming and proves certain conditions for finding solutions to new RL tasks using simulated experience with abstract actions alone.
Rule Learning, Theory.   The paper describes a supervised learning algorithm that builds an oblivious decision tree and converts it to an Oblivious read-Once Decision Graph (OODG). This process involves using mutual information to make decisions and merging nodes at the same level of the tree. These techniques fall under the category of rule learning. The paper also discusses a new pruning mechanism that works top down starting from the root, which is a theoretical aspect of the algorithm.
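The mutual-information criterion an oblivious-tree builder could use to pick one feature per level can be sketched generically. This is a toy illustration of the splitting measure only, not the OODG conversion or pruning mechanism; the XOR-style dataset is an assumption.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def mutual_information(feature_values, labels):
    """I(feature; label) = H(label) - H(label | feature)."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# Toy data: feature a alone carries no information about the label,
# while feature b determines it completely.
a = [0, 0, 1, 1]
b = [0, 1, 0, 1]
y = [0, 1, 0, 1]
mi_a = mutual_information(a, y)   # 0.0 bits
mi_b = mutual_information(b, y)   # 1.0 bit
```

A greedy builder would therefore test b at the next level; because every node at a level of an oblivious tree tests the same feature, a single such score suffices per level.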
Probabilistic Methods, Theory.   Probabilistic Methods: The paper describes the use of path integrals of multivariate conditional probabilities to derive Lagrangians, which are then fit to EEG data using Adaptive Simulated Annealing (ASA) to obtain canonical momenta indicators (CMI). The CMI are used as correlates of behavioral states and give better signal recognition than the raw data.   Theory: The paper develops a statistical mechanics of neocortical interactions (SMNI) to describe large-scale properties of short-term memory and electroencephalographic (EEG) systematics. The paper stresses the necessity of including nonlinear and stochastic structures in this development. The paper also describes how the CMI may be used in source localization, calculated with previously ASA-fitted parameters on out-of-sample data. The paper provides quantitative support for an accurate intuitive picture, portraying neocortical interactions as having common algebraic or physics mechanisms that scale across quite disparate spatial scales and functional or behavioral phenomena.
Theory.   Explanation: The paper presents theoretical results on the complexity of learning disjunctive normal form (DNF) expressions, using various models of learning such as membership queries and statistical queries. The authors use tools from Fourier analysis to prove their results. There is no mention of any practical implementation or application of AI techniques such as neural networks, genetic algorithms, or reinforcement learning.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as it proposes a new approach to model-based reinforcement learning that can handle different levels of temporal abstraction. The paper also discusses the theoretical framework of multi-time models and their relationship to Bellman equations.
Reinforcement Learning, Rule Learning, Theory.   Reinforcement Learning is present in the text as the explanation generation model is described as a goal-driven process that inter-weaves reasoning with action. This is similar to the process of reinforcement learning where an agent learns to take actions in an environment to maximize a reward signal.   Rule Learning is present in the text as the explanation generation model is described as a multi-strategy process. This implies that the system is capable of learning and applying different rules or strategies for generating explanations based on the context and goals.   Theory is present in the text as the paper discusses a novel model of explanation generation that models explanation as a goal-driven, multi-strategy, situated process inter-weaving reasoning with action. This model is based on theoretical concepts and principles of explanation generation.
This paper does not belong to any of the sub-categories of AI listed. It is a technical report on evolutionary biology and does not involve any AI techniques.
Theory.   Explanation: The paper presents a theoretical approach to designing a dynamic hybrid stabilizing controller for asymptotically controllable systems. It does not involve any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Theory.   Explanation: The paper presents a proof of a mathematical theorem related to nonlinear stabilization, using algebraic functions and Lie derivatives. There is no mention or application of any specific AI sub-category such as case-based reasoning, neural networks, or reinforcement learning.
Theory  Explanation: The paper does not discuss any specific AI techniques or algorithms, but rather presents a theoretical approach to improving software pipelining. The focus is on analyzing and optimizing the initiation interval of a loop, rather than on implementing any particular AI method.
Theory  Explanation: The paper presents a set of heuristics for reducing register requirements in modulo scheduling, which is a technique for exploiting instruction level parallelism. The paper does not use any AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning. Therefore, the paper belongs to the sub-category of AI called Theory.
Theory  Explanation: The paper presents a theoretical approach to determining optimal register requirements for modulo scheduling, without using any specific AI techniques such as neural networks or genetic algorithms. The focus is on developing a method that can be used to evaluate the performance of lifetime-sensitive modulo scheduling heuristics, rather than on implementing a specific AI algorithm.
Neural Networks.   Explanation: The paper primarily focuses on improving the efficiency of layered artificial neural network algorithms through the development of software for the Ring Array Processor (RAP). The paper discusses the development of a library of assembly language routines for neural networks and an object-oriented RAP interface in C++ that allows programmers to incorporate the RAP as a computational server into their own UNIX applications. Therefore, the paper belongs to the sub-category of AI related to Neural Networks.
Genetic Algorithms.   Explanation: The paper explicitly describes the use of genetic algorithms as the primary search component for feature selection. While other sub-categories of AI may be indirectly related to the topic, genetic algorithms are the most directly relevant.
Neural Networks.   Explanation: The paper is specifically about growing neural networks, which involves the use of algorithms to add new neurons and connections to an existing network in order to improve its performance. The other sub-categories of AI listed are not directly relevant to the topic of the paper.
Theory.   Explanation: The paper focuses on a mathematical formulation and optimization objectives for a specific problem in software pipelining. While the paper does mention other heuristic methods, it does not use any AI techniques such as neural networks, genetic algorithms, or reinforcement learning. Therefore, the paper does not belong to any of the other sub-categories of AI.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the representation of knowledge in a declarative form, which can be seen as a probabilistic approach to learning and decision making. The declarative representation allows for incremental knowledge modification, which is a key feature of probabilistic methods.  Rule Learning: The paper proposes an approach that combines the advantages of declarative and procedural representations of knowledge. This approach involves learning knowledge in a declarative form and then transferring it to a procedural form tailored to the specific decision making situation. This can be seen as a form of rule learning, where decision structures are determined based on the available attributes and their relevance to the decision making situation.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper discusses the use of evolutionary algorithms, which are a type of genetic algorithm, for evolving sigma-pi neural networks.   Neural Networks: The paper specifically focuses on the effects of structural complexity of the solutions on the generalization performance of sigma-pi neural networks. The paper also describes a method for improving the generalization accuracy of these neural networks.
This paper belongs to the sub-category of AI called Neural Networks.   Explanation: The paper describes a Machine Learning library of C classes, which is a type of software that allows users to build and train neural networks. The paper discusses the implementation of various neural network architectures, such as feedforward networks and recurrent networks, and provides examples of how to use the library to solve classification and regression problems. Therefore, the paper is primarily focused on the use of neural networks for machine learning tasks.
Rule Learning, Theory.   The paper presents an interactive algorithm for learning regular grammars from positive examples and membership queries. The algorithm identifies a finite state automaton corresponding to the target grammar by searching a version-space lattice of FSAs. This approach falls under the category of rule learning, which involves inducing rules or patterns from data. The paper also discusses the theoretical aspects of regular grammar inference, including the representation of the lattice as a version-space and the conditions for the convergence of the incremental algorithm. Therefore, the paper also falls under the category of theory.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses the use of fitness-based selection in genetic algorithms, which is a key component of this sub-category of AI. The examples analysed in the paper are also based on genetic algorithms.  Theory: The paper presents a theoretical argument about the inherent tendency for solutions to grow in size when using a fixed evaluation function with a discrete but variable length representation. The authors also reference Price's Theorem, which is a theoretical result in evolutionary biology that has been applied to evolutionary computation.
Reinforcement Learning, Genetic Algorithms  Reinforcement learning is present in the text as the paper discusses the adaptation of systems in non-stationary environments with an invariant utility function. The paper suggests that an adaptive strategy employing both evolution and learning can tolerate a higher rate of environmental variation than an evolution-only strategy.  Genetic algorithms are also present in the text as the paper discusses the use of evolution in the adaptive strategy. The paper suggests that combining evolution and learning can lead to better adaptation in non-stationary environments.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper describes a neural network for reactive obstacle avoidance based on a model of classical and operant conditioning. The success of this model is then demonstrated when implemented on two real autonomous robots.   Reinforcement Learning: The paper mentions that the neural network is based on a model of classical and operant conditioning, which are both forms of reinforcement learning. The success of the model on the real robots also demonstrates the promise of self-organizing neural networks in the domain of intelligent robotics.
Genetic Algorithms, Probabilistic Methods.   Genetic algorithms are the main focus of the paper, as they are used to solve NP-complete combinatorial optimization problems. The paper explains how genetic algorithms are based on the model of organic evolution and how they are applied to the subset sum, maximum cut, and minimum tardy task problems. The paper also mentions that no problem-specific changes are required for the genetic algorithm to achieve high-quality results.  Probabilistic methods are also present in the paper, as genetic algorithms are a type of probabilistic search algorithm. The paper explains how genetic algorithms sample only a tiny fraction of the search space, yet are still able to find the global optimum within a number of runs. The paper also mentions how constraints are taken into account by incorporating a graded penalty term into the fitness function.
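A minimal sketch of the approach described above — a generational GA whose fitness function incorporates a graded penalty for constraint violation — applied to a tiny subset-sum instance. The instance, population size, mutation rate, and penalty shape are all illustrative assumptions, not the paper's configuration.

```python
import random

def fitness(bits, weights, target):
    """Graded penalty: sums over the target are penalized in proportion
    to how far they overshoot, so near-feasible solutions still rank well."""
    s = sum(w for w, b in zip(weights, bits) if b)
    return s if s <= target else target - (s - target)

def genetic_subset_sum(weights, target, pop=40, gens=120, seed=1):
    rng = random.Random(seed)
    n = len(weights)
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda b: fitness(b, weights, target), reverse=True)
        survivors = population[: pop // 2]          # elitist truncation selection
        children = []
        while len(children) < pop - len(survivors):
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)               # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                  # point mutation
                i = rng.randrange(n)
                child[i] ^= 1
            children.append(child)
        population = survivors + children
    return max(population, key=lambda b: fitness(b, weights, target))

weights = [8, 5, 3, 7, 9, 2, 4, 6]
best = genetic_subset_sum(weights, target=20)
```

The search samples only a fraction of the 2^8 subsets, yet selection pressure plus the graded penalty steers it toward sums at or just under the target.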
Neural Networks.   Explanation: The paper is specifically focused on designing a programming language for expressing dynamic neural network learning algorithms and analyzing its performance on parallel machines. The other sub-categories of AI (Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, Theory) are not directly relevant to the content of the paper.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms are present in the paper as the authors use an evolutionary approach to design robots that can adapt and improve over time. They state that "the evolutionary approach is based on the principles of natural selection and genetic inheritance" and describe how they use a genetic algorithm to evolve the robot's behavior.  Reinforcement Learning is also present in the paper as the authors use a reward-based system to train the robots. They state that "the robots are trained using a reinforcement learning algorithm" and describe how the robots receive rewards for completing tasks correctly.
Genetic Algorithms.   Explanation: The paper explicitly mentions "augmenting genetic algorithms" and "new genetic operators" for solving the quadratic assignment problem. The focus of the paper is on the application and performance of a genetic local search approach, which falls under the category of genetic algorithms in AI.
Genetic Algorithms, Hill Climbing, Theory.  Genetic Algorithms: The paper discusses the difficulty of the Ant problem for Genetic Algorithms, as the program search space is highly deceptive and requires large building blocks to be assembled before they have above average fitness.  Hill Climbing: The paper suggests that the Ant problem is difficult for hill climbing algorithms due to the rugged search space with many multiple plateaus split by deep valleys and many local and global optima.  Theory: The paper analyzes the program search space in terms of fixed length schema and characterizes it as highly deceptive, suggesting that simple solutions require large building blocks to be assembled before they have above average fitness. The paper also discusses the density of global optima and neutral networks in the program search space, which contribute to the problem of bloat.
Probabilistic Methods, Reinforcement Learning, Theory.   Probabilistic Methods: The article discusses learning complex stochastic models, which falls under the category of probabilistic methods.  Reinforcement Learning: The article mentions reinforcement learning as one of the four current directions in machine learning research.  Theory: The article discusses open problems in machine learning research, which involves theoretical analysis and understanding of the algorithms and models.
Probabilistic Methods.   Explanation: The paper discusses Fill's algorithm, which is a probabilistic method for perfect simulation in finite state space models. The paper also discusses extensions of this algorithm to other types of models.
Reinforcement Learning, Theory.   Reinforcement learning is the main sub-category of AI discussed in the paper, as the authors propose a learning algorithm for a special case of reinforcement learning where the environment can be described by a linear system. The algorithm actively explores the environment to learn an accurate model of the system faster and produces a control law that is close to optimal.   Theory is also relevant as the authors analyze the algorithm in a PAC learning framework and show that the time taken by the algorithm is polynomial in the dimension of the state-space and action-space.
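The linear-system setting can be illustrated with a generic least-squares identification sketch: scalar noiseless dynamics and uniformly random exploratory inputs, all of which are simplifying assumptions rather than the paper's active-exploration algorithm.

```python
import random

random.seed(0)
a_true, b_true = 0.9, 0.5          # unknown scalar dynamics: x' = a x + b u

# Collect transitions under random exploratory actions.
data = []
x = 0.0
for _ in range(200):
    u = random.uniform(-1, 1)
    x_next = a_true * x + b_true * u
    data.append((x, u, x_next))
    x = x_next

# Least-squares estimate of (a, b) from the normal equations
# [sxx sxu; sxu suu] [a; b] = [sxy; suy], solved by Cramer's rule.
sxx = sum(x * x for x, u, y in data)
sxu = sum(x * u for x, u, y in data)
suu = sum(u * u for x, u, y in data)
sxy = sum(x * y for x, u, y in data)
suy = sum(u * y for x, u, y in data)
det = sxx * suu - sxu * sxu
a_hat = (sxy * suu - suy * sxu) / det
b_hat = (suy * sxx - sxy * sxu) / det
```

With noiseless data the model is recovered exactly; the paper's contribution is in showing that exploration can be directed so that an accurate model (and a near-optimal control law) is obtained in time polynomial in the state- and action-space dimensions.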
Rule Learning.   Explanation: The paper discusses decision tree size biases and their impact on learning, which falls under the category of rule learning in AI. The paper specifically focuses on the complexity of concept distribution and how it affects the benefit of minimum and maximum size decision tree biases. The policy described in the paper also pertains to rule learning, as it guides the learner on what to do based on the complexity of the distribution of concepts.
Probabilistic Methods, Rule Learning  Probabilistic Methods: The paper discusses the use of probabilistic methods such as Bayesian networks and Markov models to model collective memory and predict the likelihood of certain events or information being remembered by a group.  Rule Learning: The paper also discusses the use of rule learning algorithms to extract patterns and rules from collective memory data, which can then be used to guide exploration and search for new information. The authors specifically mention the use of association rule mining and decision tree learning.
Probabilistic Methods.   Explanation: The paper describes a sound classification method based on matching higher order spectra (HOS) of acoustic signals. This method uses statistical features to classify sounds, which falls under the category of probabilistic methods in AI. The paper does not mention any other sub-categories of AI such as case-based, genetic algorithms, neural networks, reinforcement learning, rule learning, or theory.
Rule Learning, Theory.   Explanation:   The paper discusses a method for generating declarative language bias for top-down ILP systems, which falls under the sub-category of Rule Learning in AI. The authors propose a two-level approach where an expert provides abstract meta-declarations and the user declares the relationship between the meta-level and the given database to obtain a low-level declarative language bias. This approach involves the use of schemata, which are abstract specifications of the declarative language bias.   The paper also discusses the properties of the translation algorithm that generates schemata, which falls under the sub-category of Theory in AI. The authors verify several properties of the translation algorithm and apply it successfully to a few chemical domains.
Probabilistic Methods.   Explanation: The paper discusses the behavior of the distribution of GCV (Generalized Cross-Validation) smoothing parameter estimates near zero. GCV is a probabilistic method used in statistical analysis to estimate the smoothing parameter in non-parametric regression. The paper uses mathematical and statistical analysis to study the behavior of the distribution of GCV estimates near zero. Therefore, this paper belongs to the sub-category of Probabilistic Methods in AI.
Rule Learning, Theory.   Explanation: The paper discusses the problem of learning rules of high utility and introduces two new techniques for improving the utility of learned rules. The first technique involves combining EBL with inductive learning techniques to learn a better set of control rules, while the second technique involves using inductive techniques to learn approximate control rules. These techniques are synthesized in an algorithm called approximating abductive explanation based learning (AxA-EBL). The paper focuses on the theoretical aspects of these techniques and their application in several domains. Therefore, the paper belongs to the sub-categories of Rule Learning and Theory.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms: The paper specifically addresses the problem of program discovery as defined by Genetic Programming. The authors compare their hybridized approach to traditional Genetic Programming and show that it performs better in terms of fitness evaluations and success rate.   Probabilistic Methods: The paper mentions two traditional single point search algorithms: Simulated Annealing and Stochastic Iterated Hill Climbing, both of which are probabilistic methods. The authors combine these methods with a hierarchical crossover operator to create their hybridized approach. Additionally, the hill climbing component of their approach has options for generating candidate solutions using mutation or crossover, both of which involve probabilistic selection of individuals or genes.
Rule Learning, Theory.   Explanation: The paper discusses the use of Inductive Logic Programming (ILP) methods for pre-processing temporal databases to extract relationships that are intimately connected to the temporal nature of data. ILP is a rule learning method that uses predicate logic language to discover regularities in data. The paper also discusses the theoretical aspects of the problem of discovering regularities in temporal databases.
Theory.   Explanation: The paper does not discuss any specific AI techniques or algorithms, but rather provides a summary of the progress and plans of the L0 project, which aims to develop a new theoretical framework for AI. The paper discusses the need for a new approach to AI that can address the limitations of current methods, and outlines the key principles and goals of the L0 project. Therefore, the paper is most closely related to the category of Theory in AI.
Probabilistic Methods.   Explanation: The paper discusses a continuous time method of approximating a given distribution using the Langevin diffusion, which is a probabilistic method. The paper also considers conditions under which the discrete approximations to the diffusion converge, which is also related to probabilistic methods.
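The simplest discrete approximation to the Langevin diffusion of the kind mentioned here is the Euler scheme (the unadjusted Langevin algorithm). A minimal sketch, with a standard-normal target chosen purely for illustration (not taken from the paper):

```python
import numpy as np

def ula_samples(grad_log_p, x0, step=0.05, n=20000, seed=0):
    """Unadjusted Langevin algorithm: the Euler discretization of the
    Langevin diffusion dX_t = (1/2) grad log p(X_t) dt + dW_t."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    out = np.empty(n)
    for i in range(n):
        # One Euler step: drift toward high density plus Gaussian noise
        x += 0.5 * step * grad_log_p(x) + np.sqrt(step) * rng.standard_normal()
        out[i] = x
    return out

# Target: standard normal, so grad log p(x) = -x
samples = ula_samples(lambda x: -x, x0=3.0)
```

For a small step size the chain's stationary distribution is close to the target; the convergence conditions the paper analyzes concern exactly how this approximation behaves as the discretization is refined.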
Genetic Algorithms.   Explanation: The paper presents an approach to the automatic generation of agents using genetic programming to evolve both the programs and the representation scheme. The approach focuses on the ability of agents to discover information about their environment, encode this information for later use, and create simple plans utilizing the stored mental models. Therefore, the paper belongs to the sub-category of AI that uses genetic algorithms.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the use of probability theory in reasoning about time and uncertainty. It presents a framework for representing and reasoning about temporal uncertainty using probability distributions. The authors also discuss the use of Bayesian networks and Markov decision processes for modeling and reasoning about time and probability.  Theory: The paper presents a theoretical framework for reasoning about time and probability. It discusses the underlying principles and assumptions of probabilistic reasoning and how they can be applied to temporal reasoning. The authors also discuss the limitations and challenges of using probabilistic methods for reasoning about time and uncertainty.
Reinforcement Learning, Probabilistic Methods.   Reinforcement learning is the main topic of the paper, as it discusses how reinforcement learning can be used to learn models of the world's dynamics and enable planning at different levels of abstraction.   Probabilistic methods are also mentioned, as the paper discusses how multi-time models can be used to predict what will happen, rather than when a certain event will take place. This involves probabilistic reasoning about the likelihood of different outcomes.
Probabilistic Methods.   Explanation: The paper discusses the use of smoothing spline models with correlated random errors, which is a probabilistic method commonly used in statistical modeling. The authors use a Bayesian approach to estimate the parameters of the model and discuss the use of Markov Chain Monte Carlo (MCMC) methods for inference. The paper also discusses the use of prior distributions and model selection criteria, which are common in probabilistic modeling.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the use of probabilistic models for multiple sequence alignment, specifically the use of Hidden Markov Models (HMMs) and their extensions. The authors propose a new optimization criterion based on the posterior probability of the alignment given the sequences and the model. They also compare their approach to other probabilistic methods such as Gibbs sampling and simulated annealing.  Theory: The paper presents a theoretical framework for designing optimization criteria for multiple sequence alignment. The authors discuss the properties of different criteria, such as their sensitivity to the choice of gap penalties and their ability to handle different types of sequences. They also provide a mathematical analysis of their proposed criterion and show that it has desirable properties such as convexity and continuity.
Rule Learning, Theory.   Explanation: The paper investigates the ECOC technique and its effectiveness when used with decision-tree learning algorithms. It provides theoretical explanations for why ECOC works, particularly in reducing the variance and correcting for bias in the learning algorithm. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, or Reinforcement Learning.
Genetic Algorithms, Rule Learning.   Genetic Algorithms: The paper describes a method for evolving programs and their control structures using a genetic algorithm. The algorithm involves creating a population of programs and control structures, evaluating their fitness, and selecting the best individuals to reproduce and create the next generation. This process is repeated until a satisfactory solution is found.   Rule Learning: The paper also discusses the use of rules to guide the evolution of programs and control structures. These rules are based on the knowledge and expertise of the programmer, and are used to constrain the search space and guide the evolution towards more desirable solutions. The paper describes how these rules can be encoded in the fitness function used by the genetic algorithm, and how they can be updated and refined over time.
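The create-evaluate-select-reproduce loop described in this entry can be sketched minimally. This is a generic generational GA on bitstrings with OneMax as a stand-in fitness function, not the paper's program-evolution system:

```python
import random

def evolve(fitness, n_bits=20, pop_size=40, gens=60, p_mut=0.02, seed=1):
    """Minimal generational GA: evaluate fitness, select parents by
    binary tournament, recombine with one-point crossover, mutate bit-wise."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)               # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve(sum)   # OneMax: fitness is the number of 1-bits
```

The same loop applies when individuals encode programs and control structures rather than bitstrings; only the representation and the variation operators change.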
Probabilistic Methods.   Explanation: The paper discusses the use of factor analysis, a statistical method for modeling the covariance structure of high dimensional data using a small number of latent variables. The paper also presents an Expectation-Maximization algorithm for fitting the parameters of a mixture of factor analyzers. Both of these techniques fall under the category of probabilistic methods, which involve modeling uncertainty and probability distributions in data.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper discusses the use of neural network models (EXIN and LISSOM) to simulate dynamic receptive field changes in primary visual cortex.  Reinforcement Learning: The paper discusses the use of the "EXIN" learning rules, which are a form of reinforcement learning, to model dynamic RF changes. The paper also compares the EXIN model with an adaptation model and the LISSOM model, both of which also involve some form of learning.
Rule Learning.   Explanation: The paper presents a bottom-up algorithm called MRI to induce logic programs from examples. The method is based on the analysis of saturations of examples and generates a path structure, which is an expression of a stream of values processed by predicates. The paper introduces the concepts of extension and difference of path structure, which are used to express recursive clauses. Therefore, the paper is primarily focused on the development of a rule learning algorithm for inducing logic programs.
Probabilistic Methods.   Explanation: The paper discusses the use of Gaussian process priors over functions for the Bayesian analysis of neural networks. Gaussian processes are a probabilistic method used for regression and classification tasks. The paper also mentions the use of Hybrid Monte Carlo, which is a probabilistic method for sampling from complex distributions.
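Gaussian process regression with a prior over functions, as referenced here, can be sketched in a few lines. The RBF kernel, unit signal variance, and noise level below are illustrative choices, not the paper's:

```python
import numpy as np

def gp_posterior(X, y, Xs, noise=0.1, ell=1.0):
    """GP regression with an RBF kernel: posterior mean and variance
    at test points Xs given noisy 1-D observations (X, y)."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(X, X) + noise ** 2 * np.eye(len(X))   # kernel matrix + noise
    Ks = k(Xs, X)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha                            # posterior mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)  # posterior variance
    return mean, var

X = np.linspace(0, 5, 10)
y = np.sin(X)
mean, var = gp_posterior(X, y, np.array([2.5]))
```

In the Bayesian neural-network setting the paper discusses, kernel hyperparameters (and hence these posteriors) would themselves be sampled, e.g. with Hybrid Monte Carlo, rather than fixed as above.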
Rule Learning, Theory.   Explanation:  - Rule Learning: The paper introduces a refinement procedure that takes a small number of refinement rules and returns rule revisions aiming to recover the consistency of the KB-theory. This is an example of rule learning, where the system learns from a set of rules and uses them to generate new rules.  - Theory: The paper discusses the use of explanations for guiding automated knowledge base refinement. This involves the development and refinement of theories about the causes of inconsistencies in the knowledge base.
Genetic Algorithms, Neural Networks.   Genetic Algorithms are mentioned in the abstract as the method used for evolving complex behavior in autonomous agents. The paper discusses the use of variable length genotypes and an encoding scheme to govern how genotypes develop into phenotypes.   Neural Networks are also mentioned in the abstract as the type of phenotype used in the experiments. The paper discusses the use of recurrent dynamical neural networks as phenotypes and how they can evolve arbitrary levels of behavioral complexity.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper introduces a neural network mobile robot controller (NETMORC) that autonomously learns the forward and inverse odometry of a differential drive robot through an unsupervised learning-by-doing cycle. The paper also describes the NETMORC architecture and its simplified algorithmic implementation.  Reinforcement Learning: The paper mentions that after an initial learning phase, the controller can move the robot to an arbitrary stationary or moving target while compensating for noise and other forms of disturbance, such as wheel slippage or changes in the robot's plant. This suggests that the controller is using reinforcement learning to adapt to changes in the environment and achieve its goal.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper develops an algorithm for simulating "perfect" random samples from the invariant measure of a Harris recurrent Markov chain. The method uses backward coupling of embedded regeneration times, which is a probabilistic method.  Theory: The paper provides explicit analytic bounds on the backward coupling times in the stochastically monotone case, which is a theoretical result. The paper also discusses the effectiveness of the algorithm for finite chains and for stochastically monotone chains even on continuous spaces, which involves theoretical considerations.
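The backward-coupling idea for stochastically monotone chains can be illustrated with Propp-Wilson coupling from the past on a small birth-death chain. This toy stand-in is not the paper's construction, but shows the mechanism: run the top and bottom chains from time -T with shared randomness, doubling T until they coalesce by time 0; the common state is then an exact draw from the invariant measure.

```python
import random

def cftp_monotone(n_states=6, p_up=0.3, seed=0):
    """Coupling from the past for a monotone birth-death chain with
    reflecting boundaries; returns one exact stationary sample."""
    rng = random.Random(seed)
    U = []           # shared uniforms: U[t] drives the step at time -(t+1)
    T = 1
    while True:
        while len(U) < T:
            U.append(rng.random())
        lo, hi = 0, n_states - 1
        for t in reversed(range(T)):          # times -T, ..., -1
            step = 1 if U[t] < p_up else -1   # same randomness for both chains
            lo = min(max(lo + step, 0), n_states - 1)
            hi = min(max(hi + step, 0), n_states - 1)
        if lo == hi:                          # coalesced by time 0: exact sample
            return lo
        T *= 2                                # otherwise start further in the past

sample = cftp_monotone()
```

Monotonicity is what makes this cheap: only the extreme chains need to be tracked, since every other trajectory is sandwiched between them.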
Probabilistic Methods.   Explanation: The paper discusses various algorithms for exact simulation using Markov chains, which is a probabilistic method. The Ising model and Bayesian analysis problem are also examples of probabilistic models.
Theory.   Explanation: The paper focuses on the theoretical analysis of two-stage nonlinear algorithms for identification in H∞. It does not involve the implementation or application of any specific AI technique such as neural networks, genetic algorithms, or reinforcement learning. Instead, it presents mathematical proofs and theoretical results related to the convergence and robustness of the algorithms. Therefore, the paper belongs to the sub-category of AI theory.
Genetic Algorithms, Neural Networks.   Genetic algorithms are used to simulate evolution processes in order to develop neural network control systems that exhibit specialist or generalist behaviors according to the fitness formula. This is evident in the abstract and throughout the paper.   Neural networks are also a key component of the study, as they are the control systems being evolved and studied for specialist and generalist behaviors. This is also evident in the abstract and throughout the paper.
Genetic Algorithms.   Explanation: The paper explicitly mentions genetic programming and analyzes the dynamics of the evolutionary process in relation to program structure. The use of primitive sets and the discussion of tree size, height, and density are all characteristic of genetic programming. While other sub-categories of AI may also be relevant to the topic, genetic algorithms are the most closely related.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents a new connectionist method to predict the conditional probability distribution in response to an input. The network architecture is discussed and compared to other methods.  Probabilistic Methods: The paper focuses on predicting the full distribution instead of just the mean, and presents a method to transform the problem from a regression to a classification problem. The conditional probability distribution network is used to perform direct and iterated predictions, and is compared to a nearest-neighbor predictor. The paper also discusses the differences between their method and fuzzy logic.
Rule Learning, Theory.   Rule Learning is the most related sub-category as the paper presents a framework for a top-down ILP learner that uses stable models to represent the current state specified by (possibly) negative EDB and IDB rules.   Theory is also relevant as the paper presents a cross-disciplinary concept straddling machine learning and nonmonotonic reasoning, and explores the added expressivity of negation in the background knowledge.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the use of probability measures in decision-making and ε-semantics, which accepts a default if the probability of a certain state is very small.  Theory: The paper presents a theoretical examination of the role of defaults in decision-making and proposes a concrete role for defaults in simplifying the decision-making process. It also discusses the desired properties of defaults and compares its approach with Poole's decision-theoretic defaults.
Probabilistic Methods.   Explanation: The paper discusses the asymptotic properties of estimators based on thresholding of empirical wavelet coefficients for density estimation. The study involves the minimax rates of convergence over a range of Besov function classes and global Lp′ error measures. The paper also mentions the use of a Gaussian white noise model in a Mallows metric. These are all probabilistic methods commonly used in non-parametric estimation.
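The coefficient thresholding referenced here is, in its soft form, the rule η(x, λ) = sign(x)·max(|x| − λ, 0). A sketch using the universal threshold λ = σ·√(2 log n) on a synthetic sparse coefficient vector (an illustration, not the paper's experiment):

```python
import numpy as np

def soft_threshold(coeffs, lam):
    """Soft-threshold empirical wavelet coefficients: shrink toward zero
    and set anything with magnitude below lam exactly to zero."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)

rng = np.random.default_rng(0)
true = np.zeros(256)
true[:8] = 5.0                        # sparse "signal" coefficients
noisy = true + rng.normal(0, 1, 256)  # unit-variance noise
lam = np.sqrt(2 * np.log(256))        # universal threshold, sigma = 1
est = soft_threshold(noisy, lam)
```

With high probability the pure-noise coefficients fall below λ and are zeroed, while the large signal coefficients survive (shrunken by λ); this bias-variance trade-off is what drives the minimax rates the paper studies.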
Rule Learning, Theory.   Explanation: The paper discusses a learning algorithm that uses precepts to augment training set learning. The algorithm is based on the idea of extracting critical features from examples, which is a key aspect of rule learning. The paper also discusses the theoretical and practical limitations of training set learning, which falls under the category of theory in AI.
Probabilistic Methods, Rule Learning  The paper belongs to the sub-categories of Probabilistic Methods and Rule Learning.   Probabilistic Methods: The paper proposes an incremental learning model for commonsense reasoning that uses probabilistic graphical models to represent and reason about commonsense knowledge. The authors state that their model is based on the principles of Bayesian networks, which are a type of probabilistic graphical model.  Rule Learning: The paper also discusses the use of rules in commonsense reasoning and how their model can learn new rules incrementally. The authors state that their model can learn new rules by observing examples and generalizing from them, which is a form of rule learning. They also mention that their model can use existing rules to guide its reasoning and make predictions.
Rule Learning, Theory.   The paper discusses the attribute-value language, which is commonly used in rule-based learning systems. It also proposes a new metric for measuring similarity in such systems, which is a theoretical contribution.
Probabilistic Methods.   Explanation: The paper discusses the problem of learning approximations of distributions that generate a sequence of symbols. The approach taken is to use probabilistic methods to model the switching distributions that generate the sequence. The paper presents an efficient algorithm for solving this problem and shows conditions under which the algorithm is guaranteed to work with high probability.
Neural Networks.   Explanation: The paper discusses the use of connectionist networks, which are a type of neural network, to address the challenge of systematicity in higher level cognitive activities. The paper does not discuss any other sub-categories of AI such as case-based reasoning, genetic algorithms, probabilistic methods, reinforcement learning, rule learning, or theory.
Genetic Algorithms.   Explanation: The paper's title explicitly mentions "Genetic Bin Packing," indicating that the heuristic being discussed is based on genetic algorithms. The abstract also mentions "improved genetic bin packing," further emphasizing the use of genetic algorithms in the paper. While other sub-categories of AI may be relevant to the topic of bin packing, the focus on genetic algorithms is clear in this paper.
Genetic Algorithms - The paper describes the use of a distance metric to analyze the performance of different genetic operators on a specific problem, the 6 bit multiplexor. It also discusses the difference among individuals in a population and their relationship to run performance, which are key concepts in genetic algorithms.
Genetic Algorithms.   Explanation: The paper discusses the impact of control and data dependencies among primitives in genetic programming solutions. It presents a parameterized problem to model dependency and evaluates the effect of external dependency on the ability of genetic programming to identify and promote appropriate subprograms. The paper is focused on genetic programming and does not discuss any other sub-category of AI.
Genetic Algorithms.   Explanation: The paper explicitly mentions the use of a Genetic Algorithm for solving the Multiprocessor Scheduling Problem. The comparison is made between a serial and a parallel island model Genetic Algorithm, and the results show that the parallel island model GA with migration performs better. The paper does not mention any other sub-category of AI.
Genetic Algorithms, Neural Networks, Reinforcement Learning.   Genetic Algorithms: The paper discusses Genetic Programming (GP) as a successful evolutionary learning technique that provides powerful parameterized primitive constructs. It then introduces Neural Programming, a connectionist representation for evolving programs that maintains the benefits of GP.   Neural Networks: The paper discusses Artificial Neural Networks (ANNs) and their popularity in the machine learning community due to the gradient-descent backpropagation procedure. It then introduces Neural Programming, which is a connectionist representation for evolving programs.   Reinforcement Learning: The paper introduces an Internal Reinforcement procedure for Neural Programming, which is a feedback mechanism for the evolutionary learning system. The paper demonstrates the use of Internal Reinforcement through an illustrative experiment.
Rule Learning, Theory.   Explanation:  - Rule Learning: The paper introduces a framework for top-down induction of logical decision trees, which are a type of rule-based system. The Tilde system, which is presented and evaluated, is an implementation of this framework for inducing logical decision trees.  - Theory: The paper presents a theoretical framework for inducing logical decision trees, which are more expressive than flat logic programs typically induced by empirical inductive logic programming systems. The paper also discusses the advantages and limitations of the proposed approach and provides empirical evaluation results.
This paper belongs to the sub-category of AI called Genetic Algorithms.   Explanation: The title of the paper explicitly mentions "Genetic Algorithms" and the abstract confirms that it is an indexed bibliography of genetic algorithms from 1957-1993. There is no mention or indication of any other sub-category of AI in the text.
Genetic Algorithms.   Explanation: The paper discusses the use of evolutionary algorithms, specifically genetic algorithms, to learn and improve the crossover operator in genetic programming. The authors use a fitness function to evaluate the performance of different crossover operators and evolve them over multiple generations. The paper also discusses the use of mutation and selection operators, which are key components of genetic algorithms. Therefore, this paper belongs to the sub-category of AI known as Genetic Algorithms.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the Reduced Probabilistic Neural Network (RPNN) algorithm, which is a type of neural network.   Probabilistic Methods: The paper specifically focuses on Probabilistic Neural Networks (PNN) and proposes an algorithm to improve their center point selection. The RPNN algorithm uses probabilistic methods to select a better-than-random subset of instances to use as center points for nodes in the network.
Neural Networks, Genetic Algorithms, Reinforcement Learning.   Neural Networks: The paper proposes evolving feedforward neural networks online to create agents that improve their performance through real-time interaction. The individuals in the game world are controlled by neural networks.   Genetic Algorithms: The paper describes standard neuro-evolution, where a population of networks is evolved in the task, and the network that best solves the task is found. The paper proposes evolving the population online, which is a form of genetic algorithm.   Reinforcement Learning: The individuals in the game world learn to react to varying opponents while appropriately taking into account conflicting goals. The population improves its performance through real-time interaction, which is a form of reinforcement learning.
Neural Networks.   Explanation: The paper is specifically about predicting system loads using artificial neural networks. The authors describe the architecture of the neural network used, the training process, and the results obtained. There is no mention of any other sub-category of AI in the paper.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper is primarily focused on the study of schemata in Genetic Programming (GP), which is a subfield of Genetic Algorithms. The authors review the main results in the theory of schemata in GP and conduct experiments to study the creation, propagation, and disruption of schemata in real runs for different genetic operators.   Theory: The paper also presents a new GP schema theory based on a new definition of schema. The authors discuss the results of their experiments in the light of this new theory and draw conclusions about the behavior of schemata in GP.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of radial basis function (RBF) neural networks for high dimensional nonparametric estimation in nonlinear control. It also compares RBF networks to conventional feedforward networks with sigmoidal activation functions ("backpropagation nets").  Probabilistic Methods: The paper introduces a new statistical interpretation of radial basis functions and a new method of estimating the parameters using the EM algorithm. This new statistical interpretation allows the authors to provide confidence limits on predictions made using the networks.
Genetic Algorithms, Theory.  Explanation:  - Genetic Algorithms: The paper discusses a new form of crossover in genetic programming called one-point crossover, which is similar to the corresponding operator in genetic algorithms. The paper also presents experimental evidence comparing one-point crossover with standard crossover.  - Theory: The paper describes the theoretical properties and features of one-point crossover and its variant, strict one-point crossover. The authors also highlight the usefulness of these operators from a theoretical point of view.
Theory.   Explanation: This paper belongs to the sub-category of AI called Theory because it focuses on analyzing the intrinsic limitations of worst-case identification of LTI systems using data corrupted by bounded disturbances, and characterizing the optimal worst-case asymptotic error achievable by performing experiments using any bounded inputs and estimating the plant using any identification algorithm. The paper does not use any AI techniques such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents a connectionist architecture that can learn syntactic parsing from a corpus of parsed text. The architecture can represent syntactic constituents and learn generalizations over them.   Probabilistic Methods: The paper applies Simple Synchrony Networks to mapping sequences of word tags to parse trees. After training on parsed samples of the Brown Corpus, the networks achieve precision and recall on constituents that approaches that of statistical methods for this task.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms are present in the paper as the authors propose a two-level representation for the problem, with one level on which the evolutionary operators work. They also mention using a fitness function to find good solutions to large problem instances.   Probabilistic Methods are present in the paper as the authors use a hybrid approach that incorporates heuristics based on knowledge about air traffic control. This suggests a probabilistic approach to finding solutions.
This paper belongs to the sub-category of AI called Genetic Algorithms.   Explanation:  The title of the paper explicitly mentions the use of genetic algorithms to solve a problem in network design. Throughout the paper, the authors describe the implementation and results of their genetic algorithm approach, including the use of fitness functions, crossover and mutation operators, and population size. The paper does not mention any other sub-categories of AI, making Genetic Algorithms the only applicable choice.
Probabilistic Methods.   Explanation: The paper focuses on the Expectation-Maximization (EM) algorithm, which is a probabilistic method used for maximum-likelihood parameter estimation. The paper also discusses the application of the EM algorithm to two specific probabilistic models: mixture of Gaussian densities and hidden Markov models. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
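The EM iteration for a mixture of Gaussians summarized here alternates an E-step (computing responsibilities) with an M-step (weighted maximum-likelihood updates). A minimal 1-D sketch, with synthetic two-cluster data for illustration:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50, seed=0):
    """EM for a 1-D Gaussian mixture model."""
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)                 # mixing weights
    mu = rng.choice(x, k, replace=False)    # means initialized at data points
    var = np.full(k, np.var(x))             # variances
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = (w / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood parameter updates
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
w, mu, var = em_gmm_1d(x)
```

The HMM case the paper also covers follows the same E/M pattern, with the E-step replaced by forward-backward computation of state posteriors.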
Genetic Algorithms, Neural Networks.   Genetic algorithms are the main focus of the paper, as the authors propose a new evolutionary approach called Breeder Genetic Programming (BGP) to optimize both the architecture and weights of neural networks simultaneously. The paper also discusses the use of a fitness function that incorporates the principle of Occam's razor to find minimal size networks. Neural networks are the subject of optimization in this paper, and the authors propose a method to evolve optimal neural networks using genetic algorithms.
Neural Networks. This paper belongs to the sub-category of Neural Networks in AI. The paper discusses the design and implementation of a microprocessor, SPERT, specifically for efficient execution of artificial neural network algorithms. The paper also mentions the popular error backpropagation training algorithm used in neural networks.
Genetic Algorithms.   Explanation: The paper describes a method of program discovery using a special kind of genetic algorithm, which is capable of operating on nonlinear chromosomes representing programs. The paper also introduces a new form of genetic programming called PDGP, which is based on a graph-like representation for parallel programs and uses crossover and mutation operators to manipulate the chromosomes. Therefore, the paper belongs to the sub-category of AI known as Genetic Algorithms.
Neural Networks, Probabilistic Methods.   Explanation: This paper belongs to the sub-categories of Neural Networks and Probabilistic Methods.   Neural Networks: The paper uses a generative model based on a deep neural network architecture to recognize handwritten digits. The authors explain the architecture of the neural network and how it is trained to generate realistic images of handwritten digits.   Probabilistic Methods: The generative model used in the paper is based on a probabilistic framework, where the model learns the probability distribution of the input data and generates new samples based on this distribution. The authors explain how the model is trained using maximum likelihood estimation and how it can be used for digit recognition.
Genetic Algorithms.   Explanation: The paper discusses the use of genetic programming, which is a type of genetic algorithm, to solve problems. The focus is on how the fitness structure of a problem affects the acquisition of sub-solutions in genetic programming. The paper does not discuss any other sub-category of AI.
Neural Networks, Theory.   Neural Networks: The paper describes a computational model that demonstrates how neural circuits responsive to binding matches and binding errors can be rapidly formed through long-term potentiation within structures similar to the hippocampal formation, which is critical to episodic memory.   Theory: The paper presents a theoretical account of how episodic memory is formed and how neural circuits for detecting bindings and binding errors can be rapidly formed through long-term potentiation. The model also offers an alternate interpretation of the functional role of region CA3 in the formation of episodic memories and predicts the nature of memory impairment resulting from damage to various regions of the hippocampal formation.
Probabilistic Methods, Reinforcement Learning  The paper belongs to the sub-category of Probabilistic Methods because it uses Markov Models to learn harmonic progression. The authors use a Hidden Markov Model (HMM) to model the chord progression and a Markov Decision Process (MDP) to learn the optimal sequence of chords.   The paper also belongs to the sub-category of Reinforcement Learning because the authors use an MDP to learn the optimal sequence of chords. The MDP is used to model the decision-making process of a musician who is trying to create a pleasing harmonic progression. The musician receives a reward for each chord in the sequence, and the goal is to maximize the total reward. The authors use Q-learning to learn the optimal policy for the MDP.
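The Q-learning setup described in this entry can be illustrated on a toy chain MDP; this stand-in (not the paper's chord-progression MDP) has one rewarded terminal state, so the learned policy must discover the multi-step path to it:

```python
import random

def q_learning(n_states=5, actions=(0, 1), episodes=2000,
               alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a chain MDP: action 1 moves right, action 0
    resets to the start; reaching the last state pays reward 1 and ends
    the episode. Optimistic initialization encourages exploration."""
    rng = random.Random(seed)
    Q = {(s, a): 1.0 for s in range(n_states) for a in actions}
    goal = n_states - 1
    for _ in range(episodes):
        s = 0
        while s != goal:
            if rng.random() < eps:                       # epsilon-greedy action
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda b: Q[(s, b)])
            s2 = s + 1 if a == 1 else 0
            r = 1.0 if s2 == goal else 0.0
            # Bootstrap off the next state's best value (0 at the terminal)
            target = r if s2 == goal else r + gamma * max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
policy = [max((0, 1), key=lambda b: Q[(s, b)]) for s in range(4)]
```

In the paper's setting the states would be harmonic contexts, the actions candidate chords, and the reward a measure of how pleasing the progression is; the update rule is unchanged.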
Neural Networks, Genetic Algorithms.   Neural Networks: The paper focuses on populations of artificial neural networks and their behavior in different environments. The study analyzes the emergence of generalist and specialist behaviors in these populations.  Genetic Algorithms: The paper discusses the use of evolvable fitness formulae, which allow the evaluation measure to evolve along with the expressed behavior. This leads to co-evolution of the individual and the fitness formula. This is a key feature of genetic algorithms, where the fitness function is allowed to evolve over time.
Neural Networks. This paper belongs to the sub-category of Neural Networks as it discusses the CLONES library, which is an object-oriented library for constructing, training, and utilizing layered connectionist networks. The library includes database, network behavior, and training procedures that can be customized by the user. The primary goal of the library is to run efficiently on data parallel computers, and the secondary goal is to allow heterogeneous algorithms and training procedures to be interconnected and trained together. The paper also mentions maximizing the variety of artificial neural network algorithms that can be supported.
Case Based, Rule Learning  Explanation:  This paper belongs to the sub-category of Case Based AI because it discusses the use of case-based reasoning to find analogues for innovative design. The authors describe a system called "Analogical Design Assistant" that uses a case-based approach to retrieve and adapt relevant design cases from a case library.   Additionally, the paper also belongs to the sub-category of Rule Learning AI because it discusses the use of rule-based systems to represent design knowledge. The authors describe how they used a rule-based system to encode design knowledge in the Analogical Design Assistant, and how the system uses these rules to generate new design solutions based on the retrieved cases.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper describes Genetic Programming, which is a special kind of genetic algorithm. It also describes the crossover and mutation operators used in PDGP, which are common genetic algorithm operators.  Neural Networks: The paper describes PDGP as a method for developing parallel programs in which symbolic and neural processing elements can be combined in a free and natural way. It also mentions that the interpreter used in Genetic Programming can run neural networks.
Neural Networks.   Explanation: The paper proposes a routing methodology that uses an Artificial Neural Network (ANN) to generate control bits for optical multistage interconnection networks (OMINs). The ANN functions as a parallel computer for generating routes and can be implemented using optics, making it especially appealing for an optical computing environment. The paper discusses the advantages of using a neural network routing scheme, such as fault-tolerance and faster computation for OMINs with irregular structures. Therefore, the paper primarily belongs to the sub-category of Neural Networks in AI.
Neural Networks - This paper is primarily focused on parallelizing the Quicknet ANN library for high performance neural network training on the MultiSpert system. The algorithms used for parallelization and the resulting performance model are all related to neural networks.
Genetic Algorithms.   Explanation: The paper discusses the use of genetic algorithms to solve the distributed database allocation problem. It explains how genetic algorithms have been successful in solving combinatorial problems and presents experimental results showing the superiority of the GA over a greedy heuristic. The paper does not discuss any other sub-categories of AI.
This paper belongs to the sub-category of AI called Neural Networks.   Explanation: The title and abstract of the paper clearly indicate that the focus is on gene regulation and biological development in neural networks. The paper presents an exploratory model that uses neural networks to understand how gene regulation affects biological development. While other sub-categories of AI, such as genetic algorithms or probabilistic methods, may also be relevant to this topic, the primary focus of the paper is on neural networks.
Rule Learning, Theory  The paper belongs to the sub-category of Rule Learning as it discusses the task of discovering interesting regularities in sets of data, which is a key aspect of rule learning. The paper also belongs to the sub-category of Theory as it proposes a generalized definition of the data mining task and discusses its properties and relation to the traditional concept learning problem. The paper does not relate to the other sub-categories of AI mentioned in the question.
Neural Networks.   Explanation: The paper introduces a technique that uses a time delay neural network (TDNN) to perform online training and prediction of communication patterns in order to anticipate the need for communication paths in opto-electronic reconfigurable interconnection networks. The neural network is able to learn highly repetitive communication patterns and predict the allocation of communication paths, resulting in a reduction of communication latency. Therefore, the paper belongs to the sub-category of AI known as Neural Networks.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of a time delay neural network (TDNN) predictor to reduce control latency in reconfigurable interconnection networks (INs) for shared memory multiprocessors. The TDNN is used to learn and predict repetitive memory access patterns for parallel processing applications.  Probabilistic Methods: The paper also discusses the use of a Markov predictor, which is a probabilistic method, to learn and predict memory access patterns for parallel processing applications. The Markov predictor is one of the three prediction techniques tested in the study.
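The Markov-predictor idea in the entry above can be sketched as a first-order transition table that predicts the most frequent successor of the current access. The access trace below is invented for illustration; the paper's traces come from parallel-processing applications.

```python
from collections import Counter, defaultdict

def train_markov_predictor(trace):
    """Count first-order transitions in an access trace."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(trace, trace[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, current):
    """Predict the most frequent successor of the current access."""
    if current not in counts:
        return None
    return counts[current].most_common(1)[0][0]

# Illustrative repetitive access pattern (invented, not from the paper).
trace = ["A", "B", "C", "A", "B", "C", "A", "B", "D"]
model = train_markov_predictor(trace)
print(predict_next(model, "A"))  # "B" follows "A" every time in the trace
```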
Probabilistic Methods.   Explanation: The paper discusses Bayesian nonparametric regression methods, which are probabilistic methods that use Bayesian inference to estimate the posterior distribution of the model parameters. The paper also mentions simulation-based methods, which are a type of probabilistic method that use Monte Carlo simulations to estimate the posterior distribution.
Genetic Algorithms.   Explanation: The paper discusses the use of genetic algorithms to solve the distributed file and task placement problem. It explains the concept of genetic algorithms and how they have been successfully used to solve combinatorial problems. The experimental results also show the superiority of the GA over a greedy heuristic in obtaining optimal and near-optimal solutions. There is no mention of any other sub-category of AI in the paper.
Probabilistic Methods, Neural Networks, Theory.   Probabilistic Methods: The paper discusses the consistency of posterior distributions for neural networks in a Bayesian framework, which is a probabilistic method.  Neural Networks: The paper focuses on the consistency of posterior distributions for feedforward neural networks, which is a type of neural network.  Theory: The paper provides a theoretical justification for using neural networks for nonparametric regression in a Bayesian framework, which is a theoretical result. The paper also extends earlier results on universal approximation properties of neural networks to the Bayesian setting, which is another theoretical contribution.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper introduces Anytime Influence Diagrams (AIDs), which are probabilistic graphical models used for decision-making under uncertainty. The authors discuss the use of probability distributions and conditional probabilities in constructing AIDs, as well as the use of inference algorithms such as variable elimination and message passing.  Theory: The paper presents a theoretical framework for AIDs, including a formal definition and properties such as soundness and completeness. The authors also discuss the relationship between AIDs and other decision-making frameworks such as decision trees and Markov decision processes.
Case Based, Constraint Reasoning  Explanation:  - Case Based: The paper describes the integration of case-based reasoning techniques in the planning architecture. - Constraint Reasoning: The paper also mentions the use of constraint reasoning techniques for performing temporal reasoning on temporal metric information.
Rule Learning, Theory.   Explanation:  The paper discusses different methods of pruning in the context of relational concept learning, which is a subfield of rule learning. The paper also presents theoretical considerations and experimental results to compare the different pruning methods, indicating a focus on theory. There is no mention of the other sub-categories of AI listed.
Rule Learning, Theory.   Explanation:  - Rule Learning: The paper discusses pruning algorithms in the field of Inductive Logic Programming, which is a subfield of Machine Learning that focuses on learning rules from relational data. The proposed method is a top-down approach to searching for good theories, which can be seen as a form of rule learning.  - Theory: The paper introduces a new method for improving the efficiency of pruning algorithms in relational learning, which involves searching for good theories in a top-down fashion. This can be seen as a theoretical contribution to the field.
Probabilistic Methods.   Explanation: The paper proposes a dynamic data structure for updating and querying singly connected Bayesian networks, which are probabilistic models. The focus of the paper is on improving the efficiency of probabilistic reasoning, which is a core aspect of probabilistic methods in AI.
Probabilistic Methods.   Explanation: The paper discusses a probabilistic method, the localized partial evaluation (LPE) propagation algorithm, for computing interval bounds on the marginal probability of a specified query node in a belief network. The paper does not mention any other sub-categories of AI.
Probabilistic Methods, Case Based  Explanation:   Probabilistic Methods: The paper describes the use of Bayesian networks for action selection in multiagent planning tasks, specifically in the context of simulated soccer. Bayesian nets are a type of probabilistic method used for modeling uncertain relationships between variables.  Case Based: The paper also describes the use of case-based reasoning (CBR) for determining how to implement actions in multiagent planning tasks. The authors propose an integration of Bayesian networks and CBR, where the former provides environmental context and feature selection information for the latter. The paper surveys previous integrations of Bayesian and case-based approaches in the context of CBR task decomposition.
This paper belongs to the sub-categories of Genetic Algorithms and Neural Networks.   Genetic Algorithms: The paper proposes a genetic algorithm for the topological optimization of neural networks. The algorithm uses a fitness function to evaluate the performance of each network and then applies genetic operators such as crossover and mutation to generate new networks. The process is repeated until a satisfactory solution is found.   Neural Networks: The paper focuses on the topological optimization of neural networks, which involves finding the optimal structure of the network for a given task. The proposed genetic algorithm is used to search for the optimal topology by generating and evaluating different network structures. The paper also discusses the use of backpropagation for training the optimized networks.
Genetic Algorithms.   Explanation: The paper explicitly mentions "genetic algorithms" in the title and throughout the abstract. The paper discusses techniques for reducing the disruption of superior building blocks in genetic algorithms, which suggests that the focus of the paper is on improving the performance of genetic algorithms. While other sub-categories of AI may be relevant to genetic algorithms, such as neural networks or probabilistic methods, they are not mentioned in the title or abstract and do not appear to be the primary focus of the paper. Therefore, the paper belongs to the sub-category of Genetic Algorithms.
Rule Learning, Theory.   The paper discusses algorithms for connecting concepts based on their attribute-value pairs, which is a key aspect of rule learning. The paper also presents a theoretical analysis of the time complexity of the learning models, indicating a focus on theory. The other sub-categories of AI (Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning) are not directly mentioned or applicable to the content of the paper.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper analyzes the convergence properties of the canonical genetic algorithm (CGA) with mutation, crossover and proportional reproduction applied to static optimization problems. It also discusses variants of CGAs that always maintain the best solution in the population, either before or after selection.   Theory: The paper uses homogeneous finite Markov chain analysis to prove that a CGA will never converge to the global optimum regardless of the initialization, crossover operator and objective function. It also discusses the schema theorem in relation to the results.
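The elitist variant mentioned in the entry above — a CGA that always carries the best solution forward — can be sketched as follows. The OneMax fitness, operators, and parameters are illustrative assumptions, not the paper's formal Markov-chain construction; the sketch only shows the elitism property the convergence result turns on.

```python
import random

def elitist_ga(fitness, n_bits=10, pop_size=20, generations=100,
               p_mut=0.05, seed=0):
    """Minimal elitist GA sketch: the best individual found so far is
    always re-inserted, so the best fitness never decreases."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # fitness-proportional selection
        weights = [fitness(ind) + 1e-9 for ind in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        # one-point crossover + bitwise mutation
        children = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[(i + 1) % pop_size]
            cut = rng.randrange(1, n_bits)
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                children.append([bit ^ (rng.random() < p_mut) for bit in child])
        # elitism: the best-so-far individual survives unchanged
        children[0] = best[:]
        pop = children
        best = max(pop, key=fitness)
    return best

onemax = sum  # fitness = number of ones in the bit string
print(onemax(elitist_ga(onemax)))
```

Dropping the `children[0] = best[:]` line gives the non-elitist CGA, for which the paper proves convergence to the global optimum cannot be guaranteed.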
Neural Networks.   Explanation: The paper discusses the application of Case Retrieval Nets (CRNs), which are a type of neural network, to large case bases. The results suggest that CRNs can successfully handle case bases larger than considered in other reports.
Probabilistic Methods, Rule Learning  Explanation:   Probabilistic Methods: The paper discusses the use of machine learning techniques, which often involve probabilistic models and algorithms. For example, the paper mentions using decision trees, which are a probabilistic method for classification.  Rule Learning: The paper discusses the use of WEKA, a machine learning workbench that allows for the evaluation and comparison of different machine learning schemes. This involves the creation and application of rules for data preprocessing and model selection. Additionally, the paper discusses a specific agricultural application concerned with the culling of dairy herds, which likely involves the creation of rules for decision-making.
Theory  Explanation: The paper presents a theory of meaning for generic comparatives in order to represent decision-theoretic preferences. It does not involve any specific AI sub-category such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Genetic Algorithms, Reinforcement Learning, Theory.   Genetic Algorithms: The paper discusses the influence of learning-based plasticity on the genotypic level, which is a key concept in genetic algorithms.   Reinforcement Learning: The paper discusses the Baldwin Effect, which is a form of reinforcement learning where learned behavior influences evolutionary change.   Theory: The paper presents evidence and arguments to support the existence and importance of the Baldwin Effect, which is a theoretical concept in evolutionary biology.
Case Based, Reinforcement Learning  Explanation:  - Case Based: The paper presents a case-based method for dynamic selection and modification of behavior assemblages for a navigational system. The case-based reasoning module is designed as an addition to a traditional reactive control system. - Reinforcement Learning: The paper discusses the implementation and evaluation of the method in the ACBARR system through empirical simulation of the system on several different environments, which is a common approach in reinforcement learning.
Neural Networks.   Explanation: The paper describes a learning system that uses a network of simple nodes to derive general rules from specific examples. The nodes adapt to the problem being learned and learn important features in the input space. The learning is done without requiring user adjustment of sensitive parameters and noise is tolerated with graceful degradation in performance. These are all characteristics of neural networks, which are a sub-category of AI.
Reinforcement Learning.   Explanation: The paper discusses various Monte Carlo and temporal difference value estimation algorithms with offline updates over trials in absorbing Markov reward processes. It provides analytical expressions governing changes to the bias and variance of the lookup table estimators and develops software that serves as an analysis tool to yield an exact mean-square-error curve. The paper also illustrates classes of mean-square-error curve behavior in a variety of example reward processes and shows that the various temporal difference algorithms are quite sensitive to the choice of step-size and eligibility-trace parameters. All of these aspects are related to reinforcement learning.
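A minimal TD(0) value-estimation sketch on a toy absorbing Markov reward process is shown below. The two-state chain and step size are invented for illustration; the paper's analysis covers general absorbing processes, offline updates, and the bias/variance behavior of the whole family of MC and TD estimators.

```python
def td0(episodes, alpha=0.1):
    """TD(0) value estimation on a toy absorbing Markov reward process:
    A -> B -> terminal, with reward 1 on the final transition.
    (Illustrative chain; the paper treats general reward processes.)"""
    V = {"A": 0.0, "B": 0.0, "end": 0.0}
    for _ in range(episodes):
        # one trial through the absorbing chain, one TD(0) update per step
        for s, s2, r in (("A", "B", 0.0), ("B", "end", 1.0)):
            V[s] += alpha * (r + V[s2] - V[s])
    return V

V = td0(500)
# Both estimates approach the true undiscounted values V(A) = V(B) = 1.
print(round(V["A"], 3), round(V["B"], 3))
```

The step size `alpha` illustrates the sensitivity the paper reports: larger values converge faster on this noiseless chain but would increase estimator variance under stochastic rewards.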
Neural Networks, Machine Learning.   Neural Networks: The paper primarily focuses on the use of neural networks for natural language processing, specifically for the task of classifying sentences as grammatical or ungrammatical. The authors discuss the challenges of using neural networks for this task, including the need to handle recursive processes and symbolic computation traditionally used in linguistic frameworks. They also compare the performance of different types of neural networks and non-neural network machine learning models.  Machine Learning: The paper also falls under the sub-category of machine learning, as the authors explore various machine learning algorithms and techniques for training the neural networks and non-neural network models. They discuss the use of simulated annealing, decision trees, and nearest-neighbors algorithms, among others. The authors also highlight the importance of improving the convergence of gradient descent training algorithms for recurrent neural networks.
Theory.   Explanation: This paper presents a theoretical framework for parallel unconstrained optimization, without implementing any specific AI algorithm. The paper does not involve any case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
The paper belongs to the sub-categories of AI: Program Synthesis, Program Transformation Techniques, and Constraint Satisfaction.   Program Synthesis is present in the paper as the authors discuss the use of techniques to automatically generate programs that can simulate, optimize, and satisfy constraints. Program Transformation Techniques are also mentioned as a means of transforming existing programs to improve their performance or to meet new requirements. Constraint Satisfaction is a key focus of the paper, as the authors discuss the use of techniques to find solutions that satisfy a set of constraints.
Genetic Algorithms, Neural Networks, Reinforcement Learning.   Genetic Algorithms: The paper uses a steady-state genetic algorithm to model an evolutionary process shaping the NNets, in particular their sensors.   Neural Networks: The paper uses neural networks (NNets) to model individuals.   Reinforcement Learning: The paper considers variants where the NNets learn via reinforcement learning, and finds that reinforcement learning using a small number of crude contact sensors provides a significant advantage.
Probabilistic Methods, Reinforcement Learning, Rule Learning.   Probabilistic Methods: The bibliography includes references to papers on Bayesian networks, probabilistic graphical models, and probabilistic reasoning.  Reinforcement Learning: The bibliography includes references to papers on reinforcement learning, Q-learning, and decision making under uncertainty.  Rule Learning: The bibliography includes references to papers on decision trees, rule induction, and expert systems.
Probabilistic Methods.   Explanation: The paper proposes a Bayesian approach to MARS fitting, which involves a probability distribution over the space of possible MARS models and the use of Markov chain Monte Carlo methods to explore this distribution. This is a clear indication that the paper belongs to the sub-category of Probabilistic Methods in AI.
Rule Learning, Theory.   The paper presents a system called Merlin for learning regular languages that represent allowed sequences of resolution steps in logic programming. This falls under the sub-category of Rule Learning, which involves learning rules or logical statements from data. The paper also discusses the theoretical aspects of learning regular languages and the limitations of using sets of resolvents to represent allowed sequences of resolution steps, which falls under the sub-category of Theory.
Probabilistic Methods.   Explanation: The paper discusses the use of probabilistic methods, specifically coupling from the past and Gibbs sampling, to produce perfect simulations of multivariate distributions with infinite or uncountable state spaces. The focus is on the probabilistic modeling and simulation of these complex systems, rather than on other sub-categories of AI such as rule learning or neural networks.
Theory.   Explanation: This paper presents a theoretical result that establishes the equivalence between null asymptotic controllability of nonlinear finite-dimensional control systems and the existence of continuous control-Lyapunov functions (clf's) defined by means of generalized derivatives. The proof relies on viability theory and optimal control techniques, which are both theoretical frameworks in control theory. There is no mention or application of any of the other sub-categories of AI listed in the question.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the problem of pattern classification and seeks to minimize the risk of predicting the classification of future examples based on previously seen examples. This involves using probability distributions to model the data and make predictions.   Theory: The paper uses mathematical analysis to derive an asymptotic characterization of the minimax risk in terms of the metric entropy properties of the class of distributions that might be generating the examples. It also uses the concept of Assouad density to characterize the minimax risk in the special case of noisy two-valued classification problems.
Genetic Algorithms.   Explanation: The paper explicitly discusses the use of genetic algorithms (GAs) and proposes a new crossover operator to improve their performance. The paper does not mention any other sub-categories of AI.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper discusses the use of a new version of cellular encoding that evolves an application-specific architecture with real-valued weights. This is a form of genetic algorithm that evolves the architecture of the neural network.  Neural Networks: The paper focuses on training neural networks for balancing poles attached to a cart on a fixed track. The learning times and generalization capabilities of the neural networks are compared for different methods, including the use of cellular encoding. The paper also discusses the architectures produced by cellular encoding, which are neural networks with specific structures and weights.
Probabilistic Methods.   Explanation: The paper discusses the computation of eigenvalues and eigenvectors for a Markov chain derived from the Independence Metropolis sampler, which is a probabilistic method used in Monte Carlo simulations. The paper also extends the result to obtain exact n-step transition probabilities, which is again a probabilistic method. The implications for diagnostic tests of convergence of Markov chain samplers also fall under the category of probabilistic methods.
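On a finite state space, the Independence Metropolis transition matrix analyzed in the entry above can be written out explicitly and checked for stationarity. The three-point target and proposal below are invented for illustration; the paper's contribution is the exact eigenvalue/eigenvector analysis of such matrices, which this sketch does not reproduce.

```python
# Build the transition matrix of an Independence Metropolis sampler on a
# small finite state space and check that the target is stationary.
# The target pi and proposal q are invented for illustration.

def independence_metropolis_matrix(pi, q):
    n = len(pi)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                # propose j from q, accept with min(1, w_j / w_i), w = pi/q
                accept = min(1.0, (pi[j] / q[j]) / (pi[i] / q[i]))
                P[i][j] = q[j] * accept
        P[i][i] = 1.0 - sum(P[i])  # rejection mass stays at i
    return P

pi = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
P = independence_metropolis_matrix(pi, q)

# Stationarity check: pi P == pi (up to rounding).
piP = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
print([round(x, 6) for x in piP])
```

Detailed balance holds because pi_i * P_ij = min(pi_i * q_j, pi_j * q_i) is symmetric in i and j, which is why the stationarity check passes.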
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the authors present a technique for automatically constructing rules that map the design goal into a reformulation chosen from a space of possible reformulations. They applied a standard inductive-learning algorithm, C4.5, to a set of training data describing which constraints are active in the optimal design for each goal encountered in a previous design session.   Probabilistic Methods are also present in the text as the authors mention that each reformulation corresponds to incorporating constraints into the search space. This suggests that there is a probability distribution over the possible reformulations, and the authors use the training data to learn which reformulation is most likely to be appropriate for a given design goal.
Theory. The paper focuses on the theoretical analysis of the time complexity of computing the maximum acyclic subgraph of a directed graph, and does not involve any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Theory.   Explanation: The paper is focused on proving a theoretical result in control theory, specifically the relationship between asymptotic controllability and feedback stabilization. The other sub-categories of AI listed (Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, Rule Learning) are not directly relevant to the content of the paper.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of a back-propagation neural network and a hybrid supervised/unsupervised neural network classifier for phonetic classification of speech tokens.  Probabilistic Methods: The paper does not explicitly mention the use of probabilistic methods, but the classification process involves assigning probabilities to different phonetic categories based on the input representation of the speech tokens.
Reinforcement Learning, Probabilistic Methods  Explanation:  This paper belongs to the sub-category of Reinforcement Learning because it discusses the use of decision-theoretic control for fully autonomous vehicles. Decision-theoretic control is a type of reinforcement learning that involves making decisions based on maximizing a reward function. The paper also belongs to the sub-category of Probabilistic Methods because it discusses the use of probabilistic models for predicting the behavior of other vehicles on the road. These models are used to inform the decision-making process of the autonomous vehicle.
Theory.   Explanation: The paper discusses the application of an Exclusive-Sum-Of-Products (ESOP) minimizer in machine learning and pattern theory. It analyzes various logic synthesis programs and proposes improvements for strongly unspecified functions. While the paper does touch on practical problems in application areas, it primarily focuses on the theoretical aspects of Boolean minimization for machine learning. Therefore, the paper belongs to the Theory sub-category of AI.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper describes the Incremental Polynomial Model-Controller Network (IPMCN), which is a network composed of controllers attached to models. The controllers are selected based on the performance of the models, and an automatic network construction algorithm is used to make the IPMCN a self-organizing non-linear controller. This network can be considered a type of neural network, as it is composed of interconnected nodes that perform computations.  Reinforcement Learning: The paper describes a closed loop reference model method to design a controller from an odd polynomial model. This method involves using a feedback loop to adjust the controller based on the performance of the system, which is a key aspect of reinforcement learning. Additionally, the paper discusses the use of local controllers capable of handling systems with complex dynamics, which is a common approach in reinforcement learning.
Theory.   Explanation: The paper offers a perspective on features and pattern finding in general, based on a complexity measure and a function decomposition algorithm. It does not focus on any specific application or implementation of AI, but rather on a theoretical approach to feature extraction and induction.
Probabilistic Methods.   Explanation: The paper discusses the problem of estimating the proportion vector that maximizes the likelihood of a given sample for a mixture of given densities. The paper adapts a framework developed for supervised learning and gives simple derivations for many of the standard iterative algorithms like gradient projection and EM. The paper also discusses the use of relative entropy and second-order Taylor expansion in the context of this problem. All of these concepts are related to probabilistic methods in AI.
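The EM iteration for this problem — maximizing likelihood over mixing proportions with the component densities held fixed — can be sketched directly. The two Gaussian components and the sample below are invented for illustration; the paper's contribution is the unified derivation of this and related updates, not this specific example.

```python
import math

def gauss(x, mu, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_proportions(data, densities, iters=200):
    """EM for mixture proportions with fixed component densities."""
    k = len(densities)
    w = [1.0 / k] * k                       # start from uniform proportions
    for _ in range(iters):
        new = [0.0] * k
        for x in data:
            p = [w[j] * densities[j](x) for j in range(k)]
            z = sum(p)
            for j in range(k):
                new[j] += p[j] / z          # posterior responsibility of comp. j
        w = [nj / len(data) for nj in new]  # EM proportion update
    return w

data = [-2.1, -1.9, -2.0, 2.0, 1.8, 2.2, 2.1]   # 3 points near -2, 4 near +2
dens = [lambda x: gauss(x, -2.0), lambda x: gauss(x, 2.0)]
w = em_proportions(data, dens)
print([round(x, 3) for x in w])
```

With well-separated components the responsibilities are nearly hard assignments, so the estimated proportions approach the empirical split 3/7 and 4/7.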
Reinforcement Learning.   Explanation: The paper explicitly compares direct reinforcement learning and model-based reinforcement learning on a specific task, indicating that the focus is on reinforcement learning. The other sub-categories of AI listed (Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Rule Learning, Theory) are not mentioned or discussed in the paper.
Neural Networks, Theory  Explanation:  The paper proposes a learnability model for universal representations, which involves the use of neural networks as a tool for learning and representing knowledge. The authors also discuss the theoretical foundations of their approach, including the use of information theory and statistical learning theory. Therefore, the paper belongs to the sub-category of AI that involves Neural Networks and Theory.
Genetic Algorithms.   Explanation: The paper specifically focuses on comparing the traditional, fixed problem representation style of a genetic algorithm with a new floating representation. The study also examines the effects of non-coding segments on both of these representations, which are a computational model of non-coding DNA and mimic the location independence of genes. The paper concludes that the combination of non-coding segments and floating building blocks encourages a GA to take advantage of its parallel search and recombination abilities. Therefore, the paper is primarily related to Genetic Algorithms.
Probabilistic Methods.   Explanation: The paper extends Hoeffding bounds, which are probabilistic methods used to provide performance guarantees for classifiers. The paper also discusses how to generalize these bounds for multiple classifiers. There is no mention of any other sub-category of AI in the text.
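The standard Hoeffding bound, and its simplest extension to several classifiers via a union bound, can be sketched numerically. This shows the textbook forms only, not the paper's sharper extension.

```python
import math

def hoeffding_bound(n, eps):
    """P(|empirical error - true error| >= eps) <= 2 exp(-2 n eps^2)."""
    return 2.0 * math.exp(-2.0 * n * eps ** 2)

def samples_needed(eps, delta, k=1):
    """Smallest n such that the bound holds for k classifiers
    simultaneously (union bound: per-classifier budget delta / k)."""
    return math.ceil(math.log(2.0 * k / delta) / (2.0 * eps ** 2))

print(hoeffding_bound(1000, 0.05))       # single-classifier failure bound
print(samples_needed(0.05, 0.05, k=10))  # n for 10 classifiers at 95% conf.
```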
Genetic Algorithms.   Explanation: The paper explicitly mentions the use of a co-evolutionary approach using genetic algorithms to evolve multiple individuals who can effectively cooperate to solve a common problem. The paper also describes the concurrent running of a GA for each individual in the group. Therefore, the paper belongs to the sub-category of AI known as Genetic Algorithms.
Rule Learning, Theory.   The paper discusses a recursive algorithm selection process for inductive learning, which involves selecting the best algorithm for a given learning task based on the characteristics of the data and the algorithms themselves. This process is based on a theoretical framework for algorithm selection, which is described in detail in the paper. Additionally, the paper discusses the use of decision trees as a rule learning method for inductive learning. Therefore, the sub-categories of AI that apply to this paper are Rule Learning and Theory.
Genetic Algorithms, Reinforcement Learning, Theory.   Genetic Algorithms: The paper discusses the use of two independently evolving populations (hosts and parasites) and the techniques of "shared sampling" and "hall of fame" to select and save good individuals from prior generations. These are all concepts related to genetic algorithms.  Reinforcement Learning: The paper discusses the concept of "competitive fitness sharing," which changes the way fitness is measured and can lead to an arms race between the two populations. This is a form of reinforcement learning, where the fitness function is based on direct competition and feedback from the environment.  Theory: The paper provides mathematical insights into the use of the new techniques and discusses testing issues, diversity, extinction, arms race progress measurements, and drift. These are all related to the theoretical aspects of competitive coevolution.
Neural Networks, Local Methods.   Neural Networks: The paper extensively discusses the use of the multi-layer perceptron (MLP) global neural network model for function approximation.   Local Methods: The paper also discusses the use of local approximation models, such as the single nearest-neighbour model and the linear local approximation (LA) model, for function approximation. The paper compares the performance of these local methods with the global MLP model. The paper also suggests a method for choosing between the two approaches based on the spread of the density histogram of the k-NN estimates for the training datasets.
Neural Networks.   Explanation: The paper presents an implementation of Kohonen Self-Organizing Feature Maps, which is a type of neural network. The paper discusses the performance of the implementation on various tasks and benchmarks related to neural network classification and training. There is no mention of any other sub-category of AI in the text.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper describes a system called MAGIC that uses a relaxation network to perform dynamic feature binding for image segmentation. The training procedure is a generalization of recurrent backpropagation to complex-valued units.  Probabilistic Methods: The paper does not explicitly mention probabilistic methods, but the system described, MAGIC, learns how to group features based on a set of presegmented examples. This suggests that the system is using some form of probabilistic learning to discover grouping heuristics.
Probabilistic Methods.   Explanation: The paper discusses the Simple Bayesian Classifier (SBC), which is a probabilistic method for classification. The paper explores the assumptions and conditions for the optimality of the SBC, and provides empirical evidence of its competitive performance in domains containing substantial degrees of attribute dependence.
Rule Learning.   Explanation: The paper proposes a method that uses Inductive Logic Programming to induce heuristic functions for searching goals to solve problems. The method takes solutions of a problem or a history of search and a set of background knowledge on the problem. The induced heuristics are described as rules that define a relation "better-choice" between operators and states. Therefore, the paper belongs to the sub-category of AI called Rule Learning.
Neural Networks, Reinforcement Learning.  This paper belongs to the sub-categories of Neural Networks and Reinforcement Learning. Neural Networks are present because the authors use a neural network model to simulate the visuomotor coordinate transformation. Reinforcement Learning is present because the authors use a reinforcement learning algorithm to train that model to generalize to local remappings of the transformation.
Probabilistic Methods.   Explanation: The paper describes the development of a monitoring system that uses sensor observation data to construct a probabilistic model of the world. The model is a Bayesian network incorporating temporal aspects, which is used to reason under uncertainty about both the causes and consequences of the events being monitored. The paper also discusses the use of more complex network structures to address specific monitoring problems, such as sensor validation and the Data Association Problem. There is no mention of Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory in the text.
Rule Learning, Theory.   Explanation: The paper discusses the use of decision tables as a hypothesis space for supervised learning algorithms, which falls under the category of rule learning. The paper also describes an incremental method for performing cross-validation, which is a theoretical approach to evaluating the performance of machine learning algorithms.
Probabilistic Methods, Rule Learning  Probabilistic Methods: The paper discusses the use of the Naive-Bayes induction algorithm for feature subset selection, which is a probabilistic method.  Rule Learning: The paper discusses statistical methods for feature subset selection, including forward selection, backward elimination, and their stepwise variants, which can be viewed as simple hill-climbing techniques in the space of feature subsets. These techniques are examples of rule learning. The paper also introduces compound operators that dynamically change the topology of the search space to better utilize the information available from the evaluation of feature subsets, which can be seen as a form of rule learning as well.
Neural Networks.   Explanation: The paper describes the construction and implementation of a motorized tracking system that uses a convolutional neural network to learn to track a head. The neural network is trained using real-time graphical user inputs or an auxiliary infrared detector as supervisory signals, and the inputs to the network consist of normalized luminance and chrominance images and motion information from frame differences. The paper also describes how the neural network rapidly adjusts the input weights during the online training phase, allowing the system to robustly track a head even in a cluttered background. Therefore, this paper belongs to the sub-category of AI known as Neural Networks.
Theory.   Explanation: This paper focuses on the theoretical complexity of various combinatorial problems related to graphs with bounded treewidth and given vertex or edge colorings. It does not involve any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Theory.  Explanation: This paper belongs to the Theory sub-category of AI. It discusses the limitations of certain learning methods and the properties of parity mappings that make them difficult to learn. It does not focus on the practical implementation or application of AI methods, but rather on the theoretical understanding of the challenges involved in learning certain types of mappings.
Probabilistic Methods, Genetic Algorithms.   Probabilistic Methods: The paper discusses several stochastic methods for solving optimization problems, which are probabilistic in nature. Examples of such methods include stochastic greedy search methods and simulated annealing.   Genetic Algorithms: The paper also discusses genetic algorithms as a stochastic method for solving optimization problems. The authors compare the performance of genetic algorithms with simpler stochastic algorithms and special-case greedy heuristics.
Probabilistic Methods. This paper belongs to the sub-category of probabilistic methods because it discusses the use of sequential importance sampling for nonparametric Bayes models involving the Dirichlet process. The authors propose strategies to improve the performance of the sampler, which involves computing importance weights based on the likelihood of the data given the model parameters. The paper also discusses the use of Rao-Blackwellization to improve the efficiency of the estimator.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper discusses the impact of a uniform prior on hypothesis functions on the expected generalization error when early stopping is applied. It also considers a non-uniform prior on early stopping solutions, which is a probabilistic concept.  Theory: The paper presents theoretical results on the impact of early stopping on the expected generalization error and on the equivalence of regularization methods with early stopping under certain conditions.
Theory.   Explanation: The paper focuses on the theoretical problem of exact learning of -DNF formulas with malicious membership queries. It does not involve any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning. Therefore, the paper belongs to the sub-category of AI theory.
Theory. This paper presents a theoretical framework for analyzing the convergence and stability properties of generalized subgradient-type algorithms in the presence of perturbations. It does not involve any specific application or implementation of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Neural Networks.   Explanation: The paper discusses the use of Radial Basis Function Networks for predicting power system security margins. Radial Basis Function Networks are a type of artificial neural network, which falls under the sub-category of Neural Networks in AI. The paper does not mention any other sub-categories of AI.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper discusses the use of genetic programming primitives to solve a difficult neural network benchmark classification problem. The fitness function driving selection changes as the population reproduces, and subproblem niches are opened rather than crowded out.   Neural Networks: The paper discusses adaptive learning agents in a fitness environment that dynamically responds to their progress. The solutions found for the neural network benchmark have a modular structure, suggesting that crossover is better able to discover modular building blocks.
Probabilistic Methods, Reinforcement Learning, Theory.  Probabilistic Methods: The paper presents an algorithm in which the robots learn by taking random walks, and the rate at which a random walk on a graph converges to the stationary distribution is characterized by the conductance of the graph.  Reinforcement Learning: The paper presents an algorithm in which the robots learn the graph and the homing sequence simultaneously by actively wandering through the graph. This can be seen as a form of reinforcement learning, where the robots receive feedback from the environment (the graph) and adjust their behavior accordingly.  Theory: The paper presents theoretical results on the expected time complexity of the learning algorithms, and characterizes the efficiency of the random-walk algorithm in terms of the conductance of the graph. The paper also introduces a new type of homing sequence for two robots, which is a theoretical contribution.
Neural Networks.   Explanation: The paper's title and introduction make it clear that the focus is on the control and visualization of neural networks. The technical description goes into detail about the architecture and implementation of the CONVIS system, which is designed specifically for neural networks. While other sub-categories of AI may be involved in the development or application of the system, the primary focus of the paper is on neural networks.
Theory.   Explanation: The paper presents a new model for learning from examples and membership queries in situations where the boundary between positive and negative examples is ill-defined. The focus is on developing algorithms for learning in this new model, rather than on the application of specific AI techniques such as neural networks or reinforcement learning. Therefore, the paper belongs to the sub-category of AI theory.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of local multivariate binary processors for contextually guided unsupervised learning. These processors are inspired by the structure and function of neural networks in the brain.  Probabilistic Methods: The paper discusses the use of probability distributions to model the uncertainty in the data and the parameters of the learning algorithm.
Neural Networks, Theory.   Neural Networks: The paper discusses a simple integrate-and-fire model that matches the experimentally measured integrative properties of cortical regular spiking cells. This model is a type of neural network that can simulate the behavior of cortical neurons.  Theory: The paper presents a theoretical explanation for the high interspike interval (ISI) variability displayed by visual cortical neurons. It examines the dynamics of neuronal integration and the variability in synaptic input current to understand this phenomenon. The paper also proposes a model that unifies seemingly contradictory arguments about neuronal spiking behavior.
Case Based, Rule Learning.  Case Based Reasoning (CBR) is mentioned in the paper as a technique used to resolve current design issues by considering previous similar situations. This falls under the sub-category of Case Based AI.   Rule Learning is present in the system's support for defeasible and qualitative reasoning, which involves the use of rules to make decisions based on uncertain or incomplete information. This falls under the sub-category of Rule Learning AI.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper adopts an average-case setting to model the "typical" labeling of a finite automaton, while retaining a worst-case model for the underlying graph of the automaton. The paper also discusses combinatorial results for randomly labeled automata and shows that the labeling of the states and the bits of the input sequence need not be truly random, but merely semi-random.  Theory: The paper presents new and efficient algorithms for learning deterministic finite automata in an entirely passive learning model. The paper also proves a number of combinatorial results for randomly labeled automata and discusses an extension of the results to a model in which automata are used to represent distributions over binary strings.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms: The paper presents a comparison of Genetic Programming with other search algorithms, including Simulated Annealing and Stochastic Iterated Hill Climbing. The hierarchical variable length representation used in the study is a key feature of Genetic Programming.  Probabilistic Methods: Simulated Annealing and Stochastic Iterated Hill Climbing are both probabilistic search algorithms, which are compared to Genetic Programming in the paper. The authors note that it is not intuitively obvious that mutation-based adaptive search can handle program discovery, but their results show that SA and SIHC can also work for the GP problems they tested.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper discusses the performance of empirical estimators for Markov chain sampling schemes, specifically Gibbs samplers with deterministic sweep. The focus is on constructing better estimators that make use of the structure of the transition distribution of the sampler. This falls under the category of probabilistic methods.  Theory: The paper presents a theoretical analysis of the performance of the standard empirical estimator and the newly constructed estimators. The authors derive expressions for the asymptotic variance of the estimators and evaluate their performance in a simulation study. This falls under the category of theory.
Reinforcement Learning, Genetic Algorithms, Theory.   Reinforcement learning is present in the text as the paper discusses the evolution of populations of agents and their learning mechanisms. Genetic algorithms are also present as the paper describes simulations of the evolution of populations of agents. Theory is present as the paper discusses the concept of a motivation system and how it must evolve along with the behaviors it evaluates.
Neural Networks, Case Based, Rule Learning  Neural Networks: The paper discusses experiments in which connectionist learning algorithms are applied to a small corpus of Scottish Gaelic.  Case Based: The paper discusses the use of instance-based learning algorithms, which fall under the category of case-based reasoning.  Rule Learning: The paper mentions that the relation between orthography and phonology has traditionally been modelled by hand-crafted rule sets, and that machine-learning approaches offer a means to gather this knowledge automatically. The paper also discusses experiments with decision-tree learning algorithms.
Theory.  Explanation: The paper proposes a theoretical approach to reducing machine descriptions while preserving scheduling constraints. There is no mention of any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Theory.  Explanation: The paper proposes a theoretical approach to selecting thresholds for wavelet shrinkage estimation of the spectrum. While the paper does not explicitly use any AI techniques, it is focused on developing a theoretical framework for a statistical problem.
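For context, wavelet shrinkage typically applies a thresholding rule such as soft thresholding to the empirical wavelet coefficients; the sketch below shows only that generic rule, not the paper's threshold-selection procedure:

```python
def soft_threshold(coeffs, t):
    """Soft thresholding: shrink each coefficient toward zero by t,
    setting coefficients smaller than t in magnitude exactly to zero."""
    return [max(abs(c) - t, 0.0) * (1.0 if c > 0 else -1.0) for c in coeffs]

# Coefficients below the threshold are zeroed; the rest are shrunk
shrunk = soft_threshold([3.0, -2.0, 0.5, -0.2], 1.0)
```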
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the use of statistical analysis and pattern recognition techniques, which are probabilistic methods, in data mining.   Rule Learning: The paper focuses on classification algorithms, which are a type of rule learning algorithm. The MLC++ system is designed to aid in the development of new algorithms, including hybrid and multi-strategy algorithms, which can be considered as variations of rule learning algorithms.
Reinforcement Learning, Neural Networks.   Reinforcement learning is the main focus of the paper, as the authors modify the Q-Learning algorithm to train a modular neural network for control. Neural networks are also a key component, as the modular neural network is used to solve the control problem.
Case Based, Probabilistic Methods.  Explanation:  Case Based: The title of the paper explicitly mentions "Case-Based" and the abstract mentions the use of "novel case-based reasoning systems" in the original study by the Naval Air Warfare Center (NAWC). The paper also discusses the replication and extension of the NAWC study using various case-based classifiers from the machine learning literature.  Probabilistic Methods: Although less prominent than the case-based approach, the paper mentions testing "several other classifiers (i.e., both case-based and otherwise) from the machine learning literature," which suggests the use of probabilistic methods common in machine learning. The paper also discusses incorporating "additional domain-specific knowledge" when applying case-based classifiers, which could involve probabilistic models of uncertainty or probability distributions.
Theory.   Explanation: This paper presents a theoretical result that establishes a connection between global asymptotic controllability and the existence of a continuous control-Lyapunov function for time-varying systems. There is no mention or application of any specific AI sub-category such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Case Based, Rule Learning.   This paper belongs to the sub-category of Case Based AI because it discusses case adaptation in case-based reasoning systems. It also belongs to the sub-category of Rule Learning because it proposes a method for learning adaptation knowledge in the form of adaptation strategies, which are a type of rule.
Reinforcement Learning.   Reinforcement Learning is the most closely related sub-category of AI for this paper. The paper discusses the importance of goal-driven learning, a fundamental concept in reinforcement learning, and describes a symposium that brought together researchers in AI, psychology, and education to discuss the topic. It also presents functional arguments from machine learning that support the necessity of goal-based focusing of learner effort, a core idea in reinforcement learning.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents a new method for evaluating the quality and reliability of a neural network predictor. The method is used in the context of multi-variate time series prediction on financial data from the New York Stock Exchange. The paper also compares the performance of neural networks to linear models.  Probabilistic Methods: The method presented in the paper allows for forecasting a probability distribution, as opposed to the traditional case of just a single value at each time step. The paper demonstrates this on a strictly held-out test set that includes the 1987 stock market crash.
Probabilistic Methods.   Explanation: The paper presents a method for obtaining local error bars, which are estimates of the confidence in the predicted value that depend on the input. The approach is based on a maximum likelihood framework, which is a probabilistic method for estimating parameters of a statistical model. The paper also mentions normally distributed target noise, which is a probabilistic assumption commonly used in regression models. Therefore, this paper belongs to the sub-category of AI known as Probabilistic Methods.
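The maximum likelihood framework behind such local error bars can be illustrated with the per-point Gaussian negative log-likelihood: when the model is allowed an input-dependent variance, the NLL is minimized where the predicted variance matches the squared residual. A minimal sketch, with numbers invented for illustration:

```python
import math

def gaussian_nll(y, mean, var):
    """Per-point negative log-likelihood under normally distributed
    target noise with (input-dependent) predicted variance `var`."""
    return 0.5 * math.log(2.0 * math.pi * var) + (y - mean) ** 2 / (2.0 * var)

# With target y = 1.5 and predicted mean 1.0 (squared residual 0.25),
# the NLL is lowest when the predicted variance equals 0.25 -- this is
# what drives a second model output toward a local error bar.
nlls = {v: gaussian_nll(1.5, 1.0, v) for v in (0.1, 0.25, 0.5, 1.0)}
```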
Probabilistic Methods.   Explanation: The paper discusses the Minimax Bayes method, which is a probabilistic approach to estimation in non-parametric settings. The paper also discusses the structure of asymptotically least favorable distributions, which is a probabilistic concept. The paper does not discuss any of the other sub-categories of AI listed.
Neural Networks.   Explanation: The paper focuses on the use of unsupervised lateral-inhibition neural networks for graphical inspection of multimodality. The three projection pursuit indices compared in the paper are all related to neural networks. Therefore, this paper belongs to the sub-category of AI known as Neural Networks.
Probabilistic Methods.   Explanation: The paper discusses the use of Markov Chain Monte Carlo (MCMC) methods, which are probabilistic methods for sampling from complex distributions. The Adaptive Proposal (AP) algorithm is a variant of the random walk Metropolis algorithm, which is a type of MCMC method. The paper also presents a comprehensive test procedure and systematic performance criteria for comparing the AP algorithm with more traditional Metropolis algorithms.
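For reference, the random walk Metropolis baseline that the AP algorithm builds on can be sketched in a few lines. This version uses a fixed Gaussian proposal; the paper's adaptive tuning of the proposal from the sample history is not reproduced:

```python
import math
import random

def rw_metropolis(log_target, x0, scale, n_steps, seed=0):
    """Random walk Metropolis with a fixed Gaussian proposal of the
    given scale (no adaptation)."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, scale)
        lp_new = log_target(x_new)
        # Accept with probability min(1, target density ratio)
        if lp_new >= lp or rng.random() < math.exp(lp_new - lp):
            x, lp = x_new, lp_new
        samples.append(x)
    return samples

# Sample from a standard normal via its unnormalised log-density
samples = rw_metropolis(lambda x: -0.5 * x * x, 0.0, 2.0, 20_000)
```

The sample mean and variance should be close to 0 and 1 respectively, up to Monte Carlo error.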
Neural Networks, Probabilistic Methods, Theory.   Neural Networks: The paper discusses different types of neural networks, including Regularization Networks, Radial Basis Functions, Hyper Basis Functions, and one-hidden-layer perceptrons.   Probabilistic Methods: The paper discusses the probabilistic interpretation of regularization and how different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces.   Theory: The paper presents a theoretical framework for understanding different types of approximation schemes, including Radial Basis Functions, tensor product splines, and additive splines. It also introduces new classes of smoothness functionals that lead to different classes of basis functions.
Genetic Algorithms, Theory.   This paper belongs to the sub-category of Genetic Algorithms because it discusses the phenomenon of bloat in genetic programming and investigates the bloating characteristics of different search techniques. The paper also proposes a novel mutation operator for genetic programming.   It also belongs to the sub-category of Theory because it presents a theoretical analysis of the causes of bloat in search techniques with discrete variable length representations using simple static evaluation functions. The paper concludes that there are two causes of bloat: search operators with no length bias tend to sample bigger trees, and competition within populations favors longer programs as they can usually reproduce more accurately.
Probabilistic Methods, Case Based, Neural Networks.   Probabilistic Methods: The paper proposes a probabilistic case-space metric for case matching and adaptation tasks. The authors use a probability propagation algorithm adopted from Bayesian reasoning systems to perform probabilistic reasoning.   Case Based: The paper focuses on case-based reasoning and proposes a solution to the case matching and adaptation problems. The authors argue that using their approach, the difficult problem of case indexing can be completely avoided.   Neural Networks: The authors show how the proposed algorithm can be implemented as a connectionist network, where efficient massively parallel case retrieval is an inherent property of the system.
Genetic Algorithms, Neural Networks, Theory.   Genetic Algorithms: The paper describes how a genetic search can be improved through simple means, using a genetic algorithm to search for optimal neural network architectures.   Neural Networks: The paper is primarily focused on evolving artificial neural networks, whose architectures are the objects of the genetic search.   Theory: The paper discusses the Baldwin effect, a theoretical concept from evolutionary biology, and explains how it is implemented in the approach to evolving artificial neural networks.
Probabilistic Methods.   Explanation: The paper discusses nonparametric alternatives to the Cox proportional hazards model, which are based on (partition) trees and (polynomial) splines. These methods extend techniques from regression analysis to the analysis of censored survival data, and involve probabilistic modeling of the conditional hazards function. The paper compares two specific methods, Survival Trees and HARE, which both use probabilistic methods to model survival data. Therefore, the paper belongs to the sub-category of Probabilistic Methods in AI.
Theory.   Explanation: The paper focuses on the development and analysis of alternative discrete-time operators for nonlinear models. It does not involve the use of case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning. The paper is primarily theoretical in nature, as it presents new mathematical operators and analyzes their properties and applications.
Probabilistic Methods.   Explanation: The paper discusses the problem of estimating a function of a probability distribution from a finite set of samples, and derives Bayes estimators for several functions of interest in statistics and information theory. The paper focuses on analytical techniques for probabilistic methods, such as the use of priors and the derivation of Bayes estimators.
Neural Networks, Theory.   Neural Networks: The paper discusses the role of receptive fields, which are a prominent computational mechanism employed by biological information processing systems, including the mammalian visual system. Receptive fields are modeled using artificial neural networks in many computer vision applications.  Theory: The paper surveys the possible computational reasons behind the ubiquity of receptive fields in vision, discussing examples of RF-based solutions to problems in vision, from spatial acuity, through sensory coding, to object recognition. The paper also discusses the organization of the mammalian visual system as retinotopic maps, which is a theoretical framework for understanding visual processing.
Neural Networks, Reinforcement Learning, Rule Learning.  Neural Networks: The paper discusses the use of local Hebbian learning, a type of learning used in neural networks, and proposes an optimization of this learning method.  Reinforcement Learning: The paper mentions the use of the δ-rule, which is a reinforcement learning algorithm, and proposes using it to optimize local Hebbian learning.  Rule Learning: The δ-rule employed for this optimization is a type of rule learning algorithm.
Neural Networks.   Explanation: The paper describes a neural architecture called CNN (Convolutional Neural Network) that is designed to learn multiple transformations of spatial representations. The entire paper is focused on the development and evaluation of this neural network architecture, making it clear that the paper belongs to the sub-category of AI known as Neural Networks.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper proposes a method for combining the forecasts of multiple neural networks to improve the accuracy of time series forecasting. The authors use a feedforward neural network with a single hidden layer and sigmoid activation function to model the time series data.   Probabilistic Methods: The paper uses a probabilistic approach to combine the forecasts of the neural networks. Specifically, the authors use a Bayesian framework to estimate the posterior distribution of the forecast errors and use this distribution to weight the forecasts of the individual neural networks. The authors also use a wavelet transform to decompose the time series into different frequency bands and apply the neural network forecasts to each band separately.
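A minimal stand-in for weighting member forecasts by their estimated reliability is precision (inverse-variance) weighting; this is a simplified sketch with invented numbers, not the paper's full Bayesian posterior scheme:

```python
def combine_forecasts(forecasts, error_variances):
    """Precision-weighted combination: each member forecast is weighted
    by the inverse of its estimated error variance, so more reliable
    networks contribute more to the combined forecast."""
    weights = [1.0 / v for v in error_variances]
    return sum(w * f for w, f in zip(weights, forecasts)) / sum(weights)

# Three hypothetical network forecasts with estimated error variances
combined = combine_forecasts([10.0, 12.0, 11.0], [1.0, 4.0, 2.0])
```

The combined forecast lands closest to the member with the smallest estimated error variance.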
Probabilistic Methods.   Explanation: The paper discusses the EM algorithm, which is a probabilistic method used for maximum likelihood estimation. The paper specifically addresses the computation of the largest eigenvalue and eigenvector of the Jacobian of the EM operator, which is important for assessing convergence in iterative simulation. The power method for eigencomputation is used to efficiently and accurately estimate these quantities. There is no mention of other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
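The power method mentioned above needs only matrix-vector products, which is what makes it practical for probing the Jacobian of the EM map. Below is a generic power-iteration sketch on a toy symmetric matrix, not the EM Jacobian itself:

```python
def power_method(matvec, dim, n_iter=200):
    """Power iteration: estimates the largest-magnitude eigenvalue and
    its eigenvector of a linear operator given only matrix-vector
    products."""
    v = [1.0] + [0.0] * (dim - 1)
    lam = 0.0
    for _ in range(n_iter):
        w = matvec(v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
        lam = sum(a * b for a, b in zip(v, matvec(v)))  # Rayleigh quotient
    return lam, v

# Toy example: [[2, 1], [1, 2]] has eigenvalues 3 and 1, with the
# dominant eigenvector proportional to [1, 1]
A = [[2.0, 1.0], [1.0, 2.0]]
lam, v = power_method(lambda x: [sum(a * b for a, b in zip(row, x)) for row in A], 2)
```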
Probabilistic Methods.   Explanation: The paper describes a directed acyclic graphical model that uses a probabilistic mechanism for dynamically selecting an appropriate subset of linear units to model each observation. The generative model can be viewed as a logistic belief net, which selects a skeleton linear model from among the available linear units. The paper also discusses using Gibbs sampling to learn the parameters of the linear and binary units. There is no mention of any other sub-category of AI in the text.
Theory.   Explanation: The paper describes a reduction from one learning problem to another, and analyzes the sample complexity of the resulting algorithm. This falls under the category of theoretical analysis of machine learning algorithms. The paper does not involve any of the other sub-categories listed.
Reinforcement Learning, Rule Learning.  The paper belongs to the sub-category of Reinforcement Learning as it discusses the use of hierarchical reinforcement learning to learn complex tasks by breaking them down into smaller sub-tasks. It also belongs to the sub-category of Rule Learning as it proposes the use of procedural abstraction mechanisms to learn rules that can generalize knowledge across different tasks; these rules are learned through a process of abstraction and refinement, which is similar to rule learning.
Neural Networks. This paper belongs to the sub-category of Neural Networks as it discusses the possibilities of incorporating control mechanisms into connectionist networks (CN) built from large numbers of relatively simple neuron-like units. The paper explores the different kinds of control mechanisms found in various systems such as the brain, fetal development, cellular function, immune system, and social organizations that might be useful in CN. The paper also examines the absence of powerful control structures and processes in CN and suggests mechanisms for central control that CN already have built into them.
Neural Networks.   Explanation: The paper discusses the use of neural networks in drug activity prediction and compares two techniques, dynamic reposing and tangent distance, both of which involve the use of neural networks. The other sub-categories of AI (Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, Theory) are not mentioned in the text.
Case Based, Heuristic Search  Explanation:  The paper belongs to the sub-category of Case Based AI because it proposes an alternative design problem solver that integrates case-based reasoning with heuristic search techniques. The authors describe four algorithms for case-based design, which exploit both general properties of parametric design tasks and application-specific heuristic knowledge.   The paper also belongs to the sub-category of Heuristic Search AI because the proposed solver relies on heuristic search, guided by that same application-specific heuristic knowledge.
Genetic Algorithms, Neural Networks.   Genetic algorithms are explicitly mentioned in the abstract as a promising approach for exploring the design space of neural architectures. The paper discusses the choice of representation scheme used in evolutionary design of neural architectures (EDNA), which is a key aspect of genetic algorithms.   Neural networks are also explicitly mentioned in the abstract as the target of the evolutionary design process. The paper discusses the properties of genetic representations of neural architectures, which are the structures being evolved.
Neural Networks.   Explanation: The paper describes a parallel language designed specifically for neural algorithms, with a focus on load balancing and irregular neural networks. The language is object-centered, with nodes and connections of a graph representing the neural network. The algorithms are based on parallel local computations and communication along the connections. Therefore, the paper is primarily related to the sub-category of AI known as Neural Networks.
Reinforcement Learning, Rule Learning, Theory.   Reinforcement Learning is present in the text as the paper argues that explanation should be modeled as a goal-driven learning process.   Rule Learning is present in the text as the paper discusses the need for an active multi-strategy process for goal-driven explanation, which involves using a range of strategies to build explanations.   Theory is present in the text as the paper discusses the issues involved in developing a new model of explanation that takes into account goal-driven learning and the use of multiple strategies.
Probabilistic Methods, Theory  Probabilistic Methods: This paper belongs to the category of probabilistic methods as it proposes a model of everyday abductive explanation that involves probabilistic reasoning. The authors use Bayesian networks to represent the relationships between different variables and to calculate the probability of different hypotheses.  Theory: This paper also belongs to the category of theory as it proposes a new theoretical framework for understanding everyday abductive explanation. The authors argue that abductive explanation involves the generation of hypotheses based on prior knowledge and experience, and the evaluation of these hypotheses based on their coherence with the available evidence and their compatibility with the agent's goals. They also propose a set of principles for evaluating the plausibility of different hypotheses and for selecting the most promising ones.
Neural Networks. The paper describes a self-organizing model of orientation maps in the primary visual cortex, which is a type of neural network. The model is used to study the tilt aftereffect, a psychological phenomenon related to visual perception. The paper explains how the same self-organizing processes that are responsible for the long-term development of the map and its lateral connections also result in tilt aftereffects over short time scales in the adult. The model allows observing large numbers of neurons and connections simultaneously, making it possible to relate higher-level phenomena to low-level events, which is difficult to do experimentally. The results give computational support for the idea that direct tilt aftereffects arise from adaptive lateral interactions between feature detectors, as has long been surmised. The model thus provides a unified computational explanation of self-organization and both direct and indirect tilt aftereffects in the primary visual cortex.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses factor graphs, which are a type of graphical model used in probabilistic methods. The algorithm described in the paper is a distributed message-passing algorithm for computing marginals of a global function, which is a common technique in probabilistic inference.  Theory: The paper presents a general algorithm for computing marginals in factor graphs, which can be applied to a wide variety of specific algorithms developed in different fields. The paper also discusses how factor graphs subsume many other graphical models, which is a theoretical result.
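The marginal computation described above can be made concrete with a minimal sum-product sketch on a two-variable chain factor graph; the factor values below are illustrative, not taken from the paper.

```python
import numpy as np

# Chain factor graph:  f0 -- x0 -- f1 -- x1
# f0 is a unary factor on x0; f1 couples x0 and x1 (illustrative values).
f0 = np.array([0.6, 0.4])                  # f0(x0)
f1 = np.array([[0.9, 0.1],                 # f1(x0, x1)
               [0.2, 0.8]])

# Sum-product messages along the chain: a variable passes on the product
# of its other incoming factor messages (here x0 has only f0's message).
msg_x0_to_f1 = f0
# A factor-to-variable message sums out the factor's other variables.
msg_f1_to_x1 = (f1 * msg_x0_to_f1[:, None]).sum(axis=0)

# The marginal of x1 is the normalized product of its incoming messages.
p_x1 = msg_f1_to_x1 / msg_f1_to_x1.sum()

# Brute-force check: enumerate the joint and sum out x0.
joint = f0[:, None] * f1
p_x1_brute = joint.sum(axis=0) / joint.sum()
assert np.allclose(p_x1, p_x1_brute)
```

The same message-passing pattern generalizes to any cycle-free factor graph, which is why a single distributed algorithm can subsume the many field-specific variants the paper surveys.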
Genetic Algorithms, Reinforcement Learning  Explanation:  This paper belongs to the sub-categories of Genetic Algorithms and Reinforcement Learning.   Genetic Algorithms: The paper uses genetic programming to evolve a time-optimal fly-to controller circuit. The authors use a genetic algorithm to evolve the controller circuit by selecting the fittest individuals from a population and applying genetic operators such as crossover and mutation to create new individuals.   Reinforcement Learning: The paper also uses reinforcement learning to train the evolved controller circuit. The authors use a reward function to evaluate the fitness of each individual in the population and use this feedback to guide the evolution process. The evolved controller circuit is then tested in a simulation environment where it learns to optimize its performance through trial and error.
Case Based, Rule Learning  Explanation:  - Case Based: The paper discusses the use of case-based reasoning and how new cases can be compared to portions of precedents to improve matching.  - Rule Learning: The paper describes a system, GREBE, that uses portions of precedents for legal analysis in the domain of Texas worker's compensation law. The system combines reasoning steps from multiple precedents to resolve new cases. This involves learning rules from past cases and applying them to new cases.
Probabilistic Methods.   Explanation: The paper proposes a model of action with probabilistic reasoning and decision analytic evaluation. The authors discuss the tradeoff between guaranteed response-time reactions and flexibility/expressiveness in designing autonomous agents that deal with time and space. The model is well-suited for tasks that require reasoning about the interaction of behaviors and events in a fixed temporal horizon, and decisions are continuously reevaluated to avoid plans becoming obsolete. The authors also discuss the tradeoffs required to guarantee a fixed response time in reasoning about nondeterministic cause-and-effect relationships, and how approximate decision making processes can be used to improve expected performance. These concepts are all related to probabilistic methods in AI.
Neural Networks.   Explanation: The paper discusses a programming model for irregular dynamic neural networks, indicating that the focus is on neural networks. No other sub-category of AI is mentioned or discussed in the paper.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper proposes a method of approximate dynamic programming for Markov decision processes based on structured problem representations using a dynamic Bayesian network.   Reinforcement Learning: The paper proposes a method for constructing value functions using decision trees as our function representation, which is a common technique in reinforcement learning. The paper also discusses the resulting approximately optimal value functions and policies, which are key concepts in reinforcement learning.
Genetic Algorithms.   Explanation: The paper describes a method for evolving programs using genetic operators, which is a key characteristic of genetic algorithms. The use of binary machine code and the manipulation of individuals in binary representation are also typical of genetic algorithms. The paper does not mention any other sub-categories of AI.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper develops an approach to addressing the central question using probability theory.   Reinforcement Learning: The paper considers the importance of exploration to game-playing programs which learn by playing against opponents. The two different learning methods implemented in the experiments are both forms of reinforcement learning.
Neural Networks, Rule Learning.   Neural Networks: The paper discusses the insertion of symbolic knowledge into neural networks, refinement of prior knowledge in its neural representation, and extraction of refined symbolic knowledge. It also reviews the research of several groups in this area.  Rule Learning: The paper discusses the extraction of refined symbolic knowledge from neural networks, which involves the extraction of rules. The KBANN algorithm is also mentioned, which is a rule extraction algorithm for neural networks.
Neural Networks.   Explanation: The paper describes a distributed neural network model called SPEC for processing sentences with recursive relative clauses. The model is based on separating the tasks of segmenting the input word sequence into clauses, forming the case-role representations, and keeping track of the recursive embeddings into different modules. The system needs to be trained only with the basic sentence constructs, and it generalizes not only to new instances of familiar relative clause structures, but to novel structures as well. The ability to process structure is largely due to a central executive network that monitors and controls the execution of the entire system.
Reinforcement Learning.   Explanation: The paper proposes a memory-based Q-learning algorithm for adaptive traffic control, which is a type of reinforcement learning. The paper discusses the limitations of Q-routing and proposes a solution that keeps the best experiences learned and reuses them by predicting the traffic trend. The effectiveness of the proposed algorithm is verified through simulations. Therefore, the paper belongs to the sub-category of Reinforcement Learning in AI.
Reinforcement Learning.   Explanation: The paper proposes a novel dual reinforcement learning approach for adapting a signal predistorter on-line while the system is performing. The approach involves two predistorters at each end of the communication channel co-adapting using the output of the other predistorter to determine their own reinforcement. The paper also discusses the success of the system in compensating for distortions using reinforcement learning. Therefore, the paper belongs to the sub-category of Reinforcement Learning in AI.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of neural networks for predicting foreign exchange rates. It also describes the process of pruning and modifying the network to improve its performance.  Probabilistic Methods: The paper discusses obtaining conditional densities for the output, which is a characteristic of probabilistic methods. It also mentions the statistical foundation of clearning, which involves probabilistic reasoning.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of neural networks for predicting foreign exchange rates and introduces the idea of clearning, which is a modification of the standard neural network approach. The paper also describes the use of pruning to obtain a smaller network.  Probabilistic Methods: The paper discusses how to obtain both point predictions and conditional densities for the output, which is a probabilistic approach. The paper also discusses the statistical foundation of clearning, which involves modeling the noise in the data.
Reinforcement Learning, Neural Networks  The paper belongs to the sub-categories of Reinforcement Learning and Neural Networks.   Reinforcement Learning is present in the paper as the authors propose a lifelong learning framework for robots that uses reinforcement learning to continuously learn from new experiences. The paper discusses how the robot's policy is updated based on the rewards received from the environment, and how the robot can learn to adapt to new tasks and environments over time.  Neural Networks are also present in the paper as the authors use deep neural networks to represent the robot's policy and value functions. The paper discusses how the neural networks are trained using backpropagation and how they are used to predict the expected rewards for different actions in different states. The paper also discusses how the neural networks are updated over time as the robot learns from new experiences.
Probabilistic Methods.   Explanation: The paper discusses a Bayesian formalism for wavelet threshold estimation in non-parametric regression. The prior distribution imposed on the wavelet coefficients is designed to capture the sparseness of wavelet expansion common to most applications. The posterior median yields a thresholding procedure. The paper also establishes a relation between the hyperparameters of the prior model and the parameters of Besov spaces, which gives insight into the meaning of the Besov space parameters. The paper proposes a standard choice of prior hyperparameters that works well in their examples. All of these aspects are related to probabilistic methods in AI.
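The posterior median described above acts as a thresholding rule on wavelet coefficients. As a rough illustration only — this sketch uses generic soft thresholding with a fixed threshold and invented coefficient values, not the paper's Bayesian posterior-median rule — thresholding a coefficient vector looks like:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; magnitudes below t become exactly zero,
    which encodes the sparseness assumption on the wavelet expansion."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# Illustrative coefficients: a few large (signal) and several small (noise).
coeffs = np.array([4.0, -0.3, 0.1, -2.5, 0.05])
shrunk = soft_threshold(coeffs, 0.5)
assert np.allclose(shrunk, [3.5, 0.0, 0.0, -2.0, 0.0])
```

In the paper's setting the threshold is not fixed by hand but emerges from the prior's hyperparameters, which is what ties the procedure to the Besov space parameters.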
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the Laplace approximation for the marginal likelihood, which is a probabilistic method used for parameter estimation. The paper also compares two choices of basis for models parameterized by probabilities.   Theory: The paper presents a theoretical comparison of two choices of basis for models parameterized by probabilities, showing that it is possible to improve on the traditional choice. The paper also discusses the basis-dependent nature of maximum a posteriori optimization of parameters and the Laplace approximation for the marginal likelihood.
Theory.   Explanation: The paper focuses on developing an algorithm for the Perfect Phylogeny Problem, which is a classical problem in computational evolutionary biology. The authors make observations about the structure of the problem and propose an algorithm that runs efficiently for large values of k. They also show how to efficiently build a structure that represents the set of all perfect phylogenies and to randomly sample from that set. The paper does not involve any application of Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, or Rule Learning.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses belief networks, which are a form of probabilistic network representation used in the development of intelligent systems in the field of artificial intelligence. The paper also discusses local learning algorithms that can be derived for belief networks, which operate using only information that is directly available from the normal, inferential processes of the networks.  Neural Networks: The paper discusses neural networks, which represent parameterized algebraic combinations of nonlinear activation functions and have found widespread use as models of real neural systems and as function approximators because of their amenability to simple training algorithms. The paper also discusses local learning algorithms that can be derived for belief networks, which have a certain biological plausibility and allow for a massively parallel implementation.
Probabilistic Methods.   Explanation: The paper presents a parallel algorithm for learning Bayesian inference networks from data, which is a probabilistic method in AI. The paper also discusses the use of an MDL-based score metric, which is a common approach in probabilistic methods for model selection.
Probabilistic Methods.   Explanation: The paper investigates the relationship between two popular probabilistic algorithms, the EM algorithm and the Gibbs sampler, and compares their rates of convergence. The paper also discusses how improvements in one algorithm can be directly applied to the other. The examples used in the paper are all based on generalized linear mixed models, which are probabilistic models.
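A minimal EM sketch for a two-component Gaussian mixture — a far simpler setting than the paper's generalized linear mixed models, with synthetic data and unit variances assumed — shows the alternating E- and M-steps whose convergence rate the paper compares against the Gibbs sampler's:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data from two well-separated unit-variance Gaussians.
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

# Initial guesses for the means and mixing weights.
mu = np.array([-1.0, 1.0])
pi = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: posterior responsibility of each component for each point
    # (the normal constant cancels because the variances are equal).
    dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate means and mixing weights from responsibilities.
    mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
    pi = resp.mean(axis=0)

# The estimated means should land near the true component means -2 and 3.
```

A Gibbs sampler for the same model would instead draw the component assignments and means from their conditional distributions, which is the structural parallel the paper exploits.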
Neural Networks, Theory.   Neural Networks: The paper uses a BCM unsupervised network for feature extraction. This is a type of neural network that is based on the BCM theory of synaptic plasticity.   Theory: The paper discusses the use of wavelet time/frequency decomposition and compares different feature extraction methods. It also suggests that nonlinear feature extraction from wavelet representations outperforms linear choices of basis functions. These discussions are related to the theoretical aspects of signal processing and feature extraction.
Neural Networks, Rule Learning.   Neural Networks: The paper discusses the use of backpropagation, a type of neural network algorithm, for the isolated-letter speech-recognition task. It also compares the performance of error-correcting output codes with other neural network-based approaches such as Sejnowski and Rosenberg's NETtalk system.  Rule Learning: The paper discusses the use of decision-tree algorithms such as ID3 and CART, which are examples of rule-learning algorithms, for multiclass learning problems. It also compares the performance of error-correcting output codes with other rule-learning approaches such as binary concept learning algorithms and distributed output codes.
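The error-correcting output code idea can be sketched briefly (with an illustrative 4-class, 7-bit codebook of my own, not one from the paper): each column defines a binary subproblem for one learner, and decoding picks the class whose codeword is nearest in Hamming distance, so a single erring binary learner is corrected.

```python
import numpy as np

# Illustrative codebook: rows = classes, columns = binary subproblems.
# The minimum pairwise Hamming distance is 4, so one bit error is correctable.
codebook = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
])

def decode(bits):
    """Return the class whose codeword is nearest in Hamming distance."""
    dists = (codebook != bits).sum(axis=1)
    return int(dists.argmin())

# A correct 7-bit prediction recovers class 2 exactly...
assert decode(codebook[2]) == 2
# ...and decoding still succeeds when one binary learner errs.
noisy = codebook[2].copy()
noisy[0] ^= 1
assert decode(noisy) == 2
```

In practice each column's binary labels would come from a trained learner (a backpropagation network or decision tree, as in the paper's comparison) rather than being read off the codebook.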
Rule Learning.   Explanation: The paper discusses a specific inductive inference rule called inverse entailment proposed by Muggleton, and gives a completeness theorem for it. The paper also discusses the use of saturant generalization, which is a rule learning technique proposed by Rouveirol. The focus of the paper is on deriving hypotheses from examples and background theories using rule-based methods, which falls under the sub-category of AI known as rule learning.
Probabilistic Methods.   Explanation: The paper presents an algorithm for arc reversal in Bayesian networks, which is a probabilistic method used in AI for modeling uncertain knowledge and reasoning under uncertainty. The paper also discusses the advantages of this algorithm for the simulation of dynamic probabilistic networks, which further emphasizes its relevance to probabilistic methods in AI.
Rule Learning, Theory.   The paper presents a novel approach to learning first order logic formulae, which falls under the category of Rule Learning. The approach is based on a clausal representation of the learned formulae, which corresponds to a conjunctive normal form where each conjunct forms a constraint on positive examples. The paper also discusses the theoretical foundations of the approach and its relationship to classical attribute value learning, which falls under the category of Theory.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper describes and compares three algorithms that learn axis-parallel rectangles to solve the multiple-instance problem.   Probabilistic Methods are also present in the text as the paper discusses the ambiguity of training examples and the need to identify which feature vectors are responsible for the observed classifications, which involves probabilistic reasoning.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper describes a new class of data structures called bumptrees that are useful for efficiently implementing a number of neural network related operations. The empirical comparison with radial basis functions on a robot arm mapping learning task also suggests a connection to neural networks.  Probabilistic Methods: The paper outlines applications of bumptrees to density estimation and classification, which are both probabilistic methods.
This paper belongs to the sub-category of AI called Neural Networks.   Explanation: The paper proposes a method for automatically defining modular neural networks, which involves the use of neural networks to optimize the architecture and parameters of the modular networks. The paper also discusses the advantages of using neural networks for this task, such as their ability to learn complex patterns and their flexibility in adapting to different problem domains. Therefore, the use of neural networks is central to the approach presented in the paper.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper proposes the use of Bayesian locally weighted regression models with stochastic dynamic programming to exploit uncertainty estimates on the fit of the learned model.   Reinforcement Learning: The paper addresses the case where the system must be prevented from having catastrophic failures during learning, which is a common concern in reinforcement learning. The algorithm proposed in the paper is adapted from the dual control literature and uses stochastic dynamic programming, which is a common approach in reinforcement learning. The paper also mentions the reinforcement learning assumption that aggressive exploration should be encouraged, but addresses the converse case in which the system has to rein in exploration.
Rule Learning, Theory.   Explanation: The paper discusses an ILP system called ICL that learns first order logic formulae from positive and negative examples, which falls under the sub-category of Rule Learning. The paper also discusses extensions of ICL to handle multi-class problems and continuous values, which involves theoretical discussions on how to adapt discretization techniques from attribute value learners, thus falling under the sub-category of Theory.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper presents a learning controller that uses a connectionist approach, consisting of two networks: the policy network and the exploration network.   Reinforcement Learning: The learning controller is trained in a supervised way by a suboptimal task frame controller, followed by a reinforcement learning phase. The controller can be extended with a third network: the reinforcement network. The experiments are simulated using a CAD-based contact force simulator, and the performance of the peg-into-hole task is measured in insertion time and average/maximum force level. The paper emphasizes the importance of model-free learning techniques for repetitive robotic assembly tasks.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of a neural network-based auditory perception system for sound localization in a humanoid robot. The system is trained using a dataset of sound sources and their corresponding locations, and the neural network is used to predict the location of a sound source based on the input received from the robot's microphones.  Probabilistic Methods: The paper also discusses the use of probabilistic methods for sound localization, specifically the use of a Gaussian mixture model (GMM) to model the distribution of sound sources in the environment. The GMM is used to estimate the probability of a sound source being located at a particular position, given the input received from the robot's microphones.
Probabilistic Methods.   Explanation: The paper discusses the use of randomized encouragement in indirect experiments to assess causal influences among variables of interest. This involves probabilistic methods, as the results are based on statistical analysis of the likelihood of the observed outcomes occurring by chance. The paper does not discuss any of the other sub-categories of AI listed.
Theory.   Explanation: The paper focuses on the theoretical problem of system identification in H∞ with nonuniformly spaced frequency response measurements. It derives a large class of robustly convergent identification algorithms and provides explicit worst case error bounds. While some of the algorithms may involve probabilistic methods or neural networks, the main focus of the paper is on the theoretical foundations of the problem and the development of rigorous mathematical techniques for solving it. Therefore, the paper belongs primarily to the sub-category of Theory in AI.
Theory.   Explanation: This paper primarily focuses on the theoretical approach of exploiting instruction-level parallelism on a Raw machine through spatial and temporal instruction scheduling. While the paper does mention the use of a SUIF-based compiler, it does not delve into the specifics of any AI sub-category such as neural networks or genetic algorithms.
Reinforcement Learning, Neural Networks  The paper belongs to the sub-category of Reinforcement Learning as it discusses the learning process of a humanoid hand through trial and error, where the hand is rewarded for successful grasping and manipulation tasks. The paper also mentions the use of neural networks in the learning process, specifically a deep convolutional neural network for object recognition and a recurrent neural network for sequence learning. These neural networks are used to process sensory information and generate motor commands for the hand.
Neural Networks, Case Based.   Neural Networks: The paper discusses the use of neural networks for learning and how previous knowledge can be used to initialize and constrain the learning process.  Case Based: The paper discusses the use of previously learned knowledge to guide further learning in the same domain, which is a key aspect of case-based reasoning.
Neural Networks, Theory.   Neural Networks: The paper discusses the limitations of recurrent analog neural nets in recognizing regular languages when subject to Gaussian or other common noise distributions. It also presents a method for constructing feedforward analog neural nets that are robust against such noise.   Theory: The paper provides a precise characterization of the regular languages that can be recognized by recurrent analog neural nets subject to Gaussian or other common noise distributions. It also implies constraints on the possibilities for constructing such neural nets that are robust against realistic types of analog noise.
Neural Networks, Probabilistic Methods, Rule Learning.   Neural Networks: The paper describes a system that converts symbolic rules into a connectionist network and trains it using connectionist techniques such as backpropagation. It also discusses the possibility of modifying network architectures using the UPSTART algorithm.  Probabilistic Methods: The system described in the paper is for revising probabilistic rule bases. It uses ID3's information-gain heuristic to add new rules and modifies the certainty factors of the rule base.  Rule Learning: The paper focuses on revising certainty-factor rule bases and describes a system that converts symbolic rules into a connectionist network and trains it using connectionist techniques. It also discusses adding new rules using ID3's information-gain heuristic.
Rule Learning, Case Based  Explanation:  - Rule Learning: The paper discusses the supervised induction of a distance from examples described as Horn clauses or constrained clauses. This approach is discrimination-driven, where a small set of complex discriminant hypotheses are defined to serve as new concepts for redescribing the initial examples. This is a form of rule learning, where rules are learned from examples to classify new instances. - Case Based: The paper also discusses using the induced distance for classification via a k-nearest-neighbor process. This is a form of case-based reasoning, where new instances are classified based on their similarity to previously observed instances.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses probability theory and its limitations in decision-making. It also mentions probability modeling as a method for representing uncertainties.   Reinforcement Learning: The paper uses reinforcement learning to find the optimal sequence of questions in a diagnosis situation while maintaining high accuracy. It also demonstrates how temporal-difference learning can improve diagnosis in a heart-disease domain.
Probabilistic Methods.   Explanation: The paper discusses the Simple Bayesian Classifier (SBC), which is a probabilistic method for classification. The SBC is built based on a conditional independence model of each attribute given the class, and the paper describes how this model can be visualized to aid in exploratory data analysis. The paper does not discuss any other sub-categories of AI.
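The conditional independence model behind the SBC can be sketched in a few lines of naive Bayes with Laplace smoothing; the tiny binary dataset below is invented for illustration and is unrelated to the paper's experiments.

```python
import numpy as np

# Tiny illustrative training set: two binary attributes, binary class.
X = np.array([[1, 0], [1, 1], [0, 0], [0, 1], [1, 1], [0, 0]])
y = np.array([1, 1, 0, 0, 1, 0])

def fit_sbc(X, y):
    """Estimate P(class) and P(attribute=1 | class) with Laplace smoothing."""
    classes = np.unique(y)
    prior = {c: (y == c).mean() for c in classes}
    cond = {c: (X[y == c].sum(axis=0) + 1) / ((y == c).sum() + 2)
            for c in classes}
    return prior, cond

def predict(x, prior, cond):
    """Choose the class maximizing P(c) * prod_j P(x_j | c),
    i.e. treat attributes as conditionally independent given the class."""
    scores = {c: prior[c] * np.prod(np.where(x == 1, cond[c], 1 - cond[c]))
              for c in prior}
    return max(scores, key=scores.get)

prior, cond = fit_sbc(X, y)
assert predict(np.array([1, 1]), prior, cond) == 1
assert predict(np.array([0, 0]), prior, cond) == 0
```

The per-class conditional tables `cond` are exactly the quantities the paper proposes to visualize for exploratory data analysis.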
This paper belongs to the sub-category of AI called Reinforcement Learning. Reinforcement learning is the process of learning through trial and error by receiving feedback in the form of rewards or punishments. This paper discusses the use of reinforcement learning in the context of neural networks evolving to perform sequential decision tasks. The authors describe how the neural networks are trained using a reinforcement learning algorithm called Q-learning, which involves updating the network's weights based on the difference between the predicted and actual rewards received for each action taken. The paper also discusses the use of fitness functions to evaluate the performance of the evolving neural networks, which is a common technique in reinforcement learning.
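The Q-learning update that explanation refers to can be sketched in tabular form (on a toy chain MDP of my own construction, not the evolving networks or tasks from the paper): the temporal-difference error r + γ·max Q(s′,·) − Q(s,a) drives every update.

```python
import numpy as np

# Tabular Q-learning on a tiny deterministic chain: states 0..3,
# actions 0 (left) and 1 (right); reaching state 3 yields reward 1.
n_states, n_actions, gamma, alpha = 4, 2, 0.9, 0.5
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(s, a):
    s2 = min(s + 1, 3) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 3 else 0.0)

for _ in range(2000):
    s = int(rng.integers(0, 3))      # start anywhere but the goal
    while s != 3:
        a = int(rng.integers(0, 2))  # random exploration is fine off-policy
        s2, r = step(s, a)
        # TD update: move Q(s,a) toward r + gamma * max_a' Q(s2, a').
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

# The greedy policy should always move right, toward the goal.
assert all(Q[s, 1] > Q[s, 0] for s in range(3))
```

In the paper the table is replaced by a neural network whose weights are adjusted from the same TD error via backpropagation, but the learning signal is identical.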
Neural Networks.   Explanation: The paper describes the development of computer architectures for efficient execution of artificial neural network algorithms, specifically using the Ring Array Processor (RAP) to simulate variable precision arithmetic and guide the design of higher performance neurocomputers based on custom VLSI. The study focuses on back-propagation training algorithms and the use of reduced precision arithmetic for efficient processing. The paper also discusses the design of a programmable single chip microprocessor, SPERT, for moderate-precision fixed-point arithmetic applications. Overall, the paper is primarily focused on the use of neural networks and their efficient implementation.
This paper belongs to the sub-categories of AI: Genetic Algorithms, Neural Networks, Reinforcement Learning.   Genetic Algorithms: The paper discusses the use of genetic algorithms in evolving networks (Belew et al.), training feed-forward neural networks (McInerney and Dhawan), selecting features for neural network classifiers (Brill et al.), designing cellular neural networks (Dellaert and Vandewalle), and efficient reinforcement learning through symbiotic evolution (Moriarty and Miikkulainen). The paper also references the GENITOR algorithm (Whitley) and the Handbook of Genetic Algorithms (Davis).  Neural Networks: The paper discusses the use of neural networks in evolving networks (Belew et al.), training feed-forward neural networks (McInerney and Dhawan), selecting features for neural network classifiers (Brill et al.), and designing cellular neural networks (Dellaert and Vandewalle).  Reinforcement Learning: The paper discusses efficient reinforcement learning through symbiotic evolution (Moriarty and Miikkulainen).
Probabilistic Methods.   The paper discusses the limitations of existing machine learning techniques in addressing real design problems that involve context and multiple, often conflicting, interests. It proposes an alternative approach that partially integrates machine learning into a modeling system called n-dim. The use of machine learning in n-dim is presented, and open research issues are outlined. This approach involves probabilistic methods that can handle uncertainty and multiple sources of information.
Probabilistic Methods.   Explanation: The paper discusses an algorithm for estimating smooth functions using smoothing splines, where the direction coefficients, amount of smoothing, and number of terms are determined to optimize a single generalized cross-validation measure. This optimization is probabilistic in character, as the optimal values of these parameters are chosen according to a probabilistic measure of model fit.
Rule Learning, Theory.   Explanation:  - Rule Learning: The paper describes a learning algorithm that improves the performance of a top-down inductive logic programming (ILP) system by using extra information in the form of an algorithm sketch. The learning algorithm exploits the information contained in the sketch to refine the rules learned by the ILP system.  - Theory: The paper proposes a mechanism for improving the performance of ILP systems and describes the details of the learning algorithm that exploits the information contained in the sketch. The experiments carried out with the implemented system demonstrate the usefulness of the method and its potential in future applications.
Rule Learning.   Explanation: The paper is concerned with the problem of inducing recursive Horn clauses from small sets of training examples, which is a task in rule learning. The method presented in the paper, iterative bootstrap induction, is a rule learning technique that generates simple clauses as properties of the required definition and uses them to induce the required recursive definitions. The experiments conducted in the paper also support the effectiveness of the method in rule learning.
Genetic Algorithms, Neural Networks.   Genetic Algorithms are the main focus of the paper, as they are used to optimize subsystems of cellular neural network architectures for character recognition. The paper presents a genetic encoding for a feature detector and describes an experiment where an optimal feature detector is found using the genetic algorithm.   Neural Networks are also relevant, as the paper discusses the use of cellular neural networks in computer vision and the optimization of specific sub-modules of the system using genetic algorithms. The specific problem being investigated is character recognition using a conventional classifier network aided by an optimal feature detector.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of neural networks in financial time series analysis and compares the variability in the solution due to different network conditions (such as parameter initialization and number of hidden units) with the variability due to different data splits.   Probabilistic Methods: The paper uses a bootstrap or resampling method to compare the uncertainty in the solution stemming from the data splitting with neural network specific uncertainties. The authors also warn about drawing too strong conclusions from static data splits and highlight the potential pitfalls of ignoring variability across splits.
Reinforcement Learning, Probabilistic Methods  This paper belongs to the sub-categories of Reinforcement Learning and Probabilistic Methods. Reinforcement Learning is present in the paper as the authors propose a reinforcement learning-based approach to estimate shortest paths in dynamic graphs. They use Q-learning, a popular reinforcement learning algorithm, to learn the optimal policy for packet routing in dynamic graphs. Probabilistic Methods are also present in the paper as the authors use a probabilistic model to estimate the probability of link failures in the network. They use this information to update the Q-values in the reinforcement learning algorithm.
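The Q-learning approach this entry describes can be sketched on a toy routing problem. The graph, reward values, and hyperparameters below are illustrative assumptions, not taken from the paper; the update rule itself is the standard Q-learning rule.

```python
import random

# Hypothetical toy network for illustration: an adjacency list, with node 3 as
# the destination. Each hop costs 1; Q-learning should recover shortest paths.
GRAPH = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
GOAL = 3

def q_learning(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Q[(node, next_node)] estimates the negated cost-to-go of hopping to next_node.
    Q = {(u, v): 0.0 for u in GRAPH for v in GRAPH[u]}
    for _ in range(episodes):
        s = rng.choice([n for n in GRAPH if n != GOAL])
        while s != GOAL:
            # epsilon-greedy action selection over neighbours
            if rng.random() < eps:
                a = rng.choice(GRAPH[s])
            else:
                a = max(GRAPH[s], key=lambda v: Q[(s, v)])
            r = 0.0 if a == GOAL else -1.0   # each hop costs 1 until the goal
            nxt_best = 0.0 if a == GOAL else max(Q[(a, v)] for v in GRAPH[a])
            # standard Q-learning update
            Q[(s, a)] += alpha * (r + gamma * nxt_best - Q[(s, a)])
            s = a
    return Q

Q = q_learning()
# Greedy policy extracted from the learned Q-values.
policy = {u: max(GRAPH[u], key=lambda v: Q[(u, v)]) for u in GRAPH if u != GOAL}
```

After training, the greedy policy routes each node toward the goal along a shortest path (e.g., nodes 1 and 2 hop directly to node 3).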
Neural Networks.   Explanation: The paper discusses the use of validation during supervised training of neural networks to detect overfitting and stop training early to avoid it. The empirical investigation specifically focuses on multi-layer perceptrons, which are a type of neural network. There is no mention of any other sub-category of AI in the text.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper introduces a formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. This involves probabilistic reasoning and statistical analysis.  Theory: The paper presents a new formal model for learning from a population of hypotheses, which is a theoretical contribution to the field of machine learning. The motivation for the model is also discussed, which involves questions about how to make optimal use of multiple independent runs of a mediocre learning algorithm and settings in which the many hypotheses are obtained by a distributed population of identical learning agents.
Probabilistic Methods.   Explanation: The paper discusses the use of Markov chain Monte Carlo (MCMC) methods, which are a type of probabilistic method used in Bayesian modeling. The panel of experts discusses various issues related to the use of MCMC, such as building confidence in simulation results, assessing standard errors, and identifying models for which good MCMC algorithms exist. The paper does not discuss any of the other sub-categories of AI listed.
Neural Networks, Genetic Algorithms.   Neural Networks: The paper discusses the training of neural network models using real-valued weights.   Genetic Algorithms: The paper presents an algorithm for performing a schemata search over a real-valued weight space to find a set of weights that yield high values for a given evaluation function; schemata are a core concept in genetic algorithms. The algorithm uses the BRACE statistical technique to determine when to narrow the search space.
Probabilistic Methods.   Explanation: The paper is focused on density estimation, which is a probabilistic method used to estimate the probability density function of a random variable. The approach used in the paper falls under the class of projection estimators, which is a probabilistic method for density estimation. The paper also discusses the use of wavelets, which are a mathematical tool used in probabilistic methods for signal and image processing. Therefore, the paper belongs to the sub-category of Probabilistic Methods in AI.
Probabilistic Methods, Neural Networks, Theory.   Probabilistic Methods: The saliency map at the core of the model provides a probabilistic mechanism for selecting a subset of sensory information for further processing.   Neural Networks: The saliency map is implemented as a neural network that assigns a saliency value to each location in the visual field.   Theory: The paper presents a model for the control of selective visual attention in primates. The model is intended not only to capture the functionality of biological vision but also to be essential for understanding complex scenes in machine vision, making it a theoretical account of selective attention in both biological and artificial systems.
Probabilistic Methods.   Explanation: The paper discusses the problem of hypothesis testing from a Bayesian point of view, which involves using probability distributions to model uncertainty and updating beliefs based on observed data. The paper also provides exact and asymptotic Bayesian results for testing hypotheses of independence and dependence, which are probabilistic methods for making statistical inferences. The use of mutual information as a measure of dependence is also a probabilistic method, as it involves calculating the amount of information shared between two random variables.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses different methods for learning Bayesian networks from data, which fall under the category of probabilistic methods.   Neural Networks: The paper also draws connections between the statistical, neural network, and uncertainty communities, indicating the presence of neural networks in the discussion.
Probabilistic Methods. This paper belongs to the sub-category of Probabilistic Methods in AI. The paper discusses the use of Bayesian networks, which are factored representations of probability distributions, for inducing classifiers from data. The naive Bayes classifier, which is a simple Bayesian classifier with strong assumptions of independence among features, is also discussed. The paper evaluates different approaches for inducing classifiers from data based on recent results in the theory of learning Bayesian networks. The Tree Augmented Naive Bayes (TAN) method, which outperforms naive Bayes, is also discussed.
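The naive Bayes classifier mentioned in this entry rests on the strong assumption that features are independent given the class. A minimal sketch of that idea, using made-up toy data and Laplace smoothing (the feature names and data here are illustrative assumptions, not from the paper):

```python
from collections import Counter
import math

# Minimal categorical naive Bayes with Laplace smoothing.
def train(X, y):
    classes = Counter(y)
    # counts[c][j][value] = how often feature j takes `value` within class c
    counts = {c: [Counter() for _ in X[0]] for c in classes}
    for xs, c in zip(X, y):
        for j, v in enumerate(xs):
            counts[c][j][v] += 1
    return classes, counts

def predict(classes, counts, xs):
    n = sum(classes.values())
    best, best_lp = None, -math.inf
    for c, nc in classes.items():
        lp = math.log(nc / n)                         # log prior P(c)
        for j, v in enumerate(xs):
            # crude per-class smoothing support (a simplification)
            vocab = len(set(counts[c][j]) | {v})
            lp += math.log((counts[c][j][v] + 1) / (nc + vocab))  # Laplace
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Toy data: (outlook, windy) -> decision
X = [("sunny", "no"), ("sunny", "yes"), ("rain", "yes"), ("rain", "no")]
y = ["play", "play", "stay", "stay"]
model = train(X, y)
label = predict(*model, ("sunny", "no"))
```

The TAN method discussed in the paper relaxes exactly this independence assumption by allowing tree-structured dependencies among the features.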
Probabilistic Methods.   Explanation: The paper discusses learning probabilistic belief networks, which is a type of probabilistic method in AI. The paper proposes a new method for learning network structure from incomplete data, which is based on an extension of the Expectation-Maximization (EM) algorithm for model selection problems. The paper also describes how to learn networks in two scenarios: when the data contains missing values, and in the presence of hidden variables. All of these are examples of probabilistic methods in AI.
Probabilistic Methods, Theory  Probabilistic Methods: The paper proposes a probabilistic method for static data association, which involves estimating the probability of a measurement being associated with a particular target. The authors use a Bayesian approach to update the probability distribution over target states based on the measurements received. They also use a terrain-based prior density to incorporate prior knowledge about the likely locations of targets.  Theory: The paper presents a theoretical framework for static data association, which involves modeling the problem as a Bayesian inference task. The authors derive the equations for updating the probability distribution over target states based on the measurements received, and they also explain how to incorporate prior knowledge about the likely locations of targets using a terrain-based prior density. The paper also includes a discussion of the limitations of the proposed method and suggestions for future research.
Rule Learning, Theory.   The paper presents a framework for problems involving construction of decision trees or rules, which falls under the sub-category of Rule Learning. The paper also discusses the systematic description of greedy algorithms for cost-sensitive generalization, which falls under the sub-category of Theory.
Probabilistic Methods, Rule Learning  Explanation:   Probabilistic Methods:  The FLARE algorithm presented in the paper is based on probabilistic methods. It uses Bayesian networks to model the relationships between variables and make predictions. The authors also discuss the use of Markov Chain Monte Carlo (MCMC) methods for inference in the Bayesian network.  Rule Learning:  The FLARE algorithm also involves rule learning. The authors describe how the algorithm can incorporate prior knowledge in the form of rules, which are used to constrain the structure of the Bayesian network. The rules are learned from expert knowledge or from data, and are used to guide the search for the optimal network structure.
Probabilistic Methods, Reinforcement Learning, Theory.   Probabilistic Methods: The paper discusses the use of partial assignments, which can be interpreted in several ways, as a form of partial information. This is a probabilistic approach to learning and reasoning.  Reinforcement Learning: The paper discusses the interaction with the world supplying the learner with partial information, which can be seen as a form of reinforcement learning.  Theory: The paper presents a framework for Learning to Reason, which combines the study of Learning and Reasoning into a single task. The paper also discusses the tradeoff between learnability, the strength of the oracles used in the interface, and the range of reasoning queries the learner is guaranteed to answer correctly, which is a theoretical consideration.
Probabilistic Methods. This paper belongs to the sub-category of Probabilistic Methods in AI. The paper discusses the problem of evaluating the probability that a propositional expression is true, which is a common problem in probabilistic reasoning. The paper also discusses various methods used in approximate reasoning, such as computing degree of belief and Bayesian belief networks, which are probabilistic methods. The paper proves that counting satisfying assignments of propositional languages is intractable even for Horn and monotone formulae, which are commonly used in probabilistic reasoning. Finally, the paper identifies some restricted classes of propositional formulae for which efficient algorithms for counting satisfying assignments can be given, which is relevant to probabilistic reasoning.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of extended Kalman filter (EKF) in training recurrent neural networks (RNNs) and proposes a pruning method based on the results obtained by EKF training.   Probabilistic Methods: The paper uses the EKF algorithm, which is a probabilistic method, to estimate the parameters of the RNN. The proposed pruning method is also based on the probabilistic estimates obtained from EKF training.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper describes the use of genetic programming techniques to produce music-making programs that satisfy user-provided critical criteria.  Neural Networks: The paper also describes new work on using connectionist techniques to automatically induce musical structure from a corpus; the resulting neural networks can then serve as critics that drive the genetic programming system. The framework presented in the paper potentially supports the induction and recapitulation of deep structural features of music.
Rule Learning, Theory.   The paper describes a method called ILA (Inductive Learning with Prior Knowledge and Reasoning) which combines inductive learning with prior knowledge and reasoning. The method involves learning rules from examples and incorporating prior knowledge in the form of constraints on the learned rules. The paper also discusses the theoretical foundations of the ILA method and provides experimental results to demonstrate its effectiveness. Therefore, the paper belongs to the sub-categories of Rule Learning and Theory.
Reinforcement Learning, Rule Learning, Theory.   Reinforcement learning is present in the text as the paper discusses how the nature of the opposition during training affects learning to play two-person, perfect information board games. This is a classic example of reinforcement learning where the program learns through trial and error by receiving feedback in the form of rewards or penalties.  Rule learning is present in the text as the paper discusses appropriate metrics for post-training performance measurement and the ways those metrics can be applied. These metrics can be seen as rules that the program learns to follow in order to improve its performance.  Theory is present in the text as the paper discusses the impact of trainer error and argues for a broad variety of training experience with play at many levels. These discussions are based on theoretical concepts and ideas about how learning works and how it can be improved.
Rule Learning, Reinforcement Learning.   Rule Learning is present in the text as the paper discusses the identification of parameters that affect deductive learning and the systematic experimentation to understand the nature of those effects. The paper also discusses the study of two parameters: the point on the satisficing-optimizing scale that is used during the search carried out during problem-solving time and during learning time.  Reinforcement Learning is present in the text as the paper discusses the utility of macros in problem-solving and how the strategy used during problem-solving affects the efficiency of problem solvers. The paper also discusses the sensitivity of deductive learners to the type of search used during learning and how optimizing search improves the efficiency for problem solvers that require a high level of optimality.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper focuses on representing actions with stochastic effects using Bayesian networks and influence diagrams. It compares different techniques for specifying the dynamics of a system and proposes solutions to deal with the frame problem.   Theory: The paper discusses the frame problem and its implications for representing probabilistic system dynamics. It also compares the proposed solutions with Reiter's solution to the frame problem for the situation calculus.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper presents a randomized algorithm for reconstructing polynomials that accesses the function f only as a black box; such randomized algorithms are probabilistic methods.  Theory: The paper presents a theoretical analysis of the problem of reconstructing polynomials from noisy data, provides a randomized algorithm with polynomial running time, and generalizes a previously known algorithm.
Reinforcement Learning, Rule Learning.   Reinforcement learning is present in the paper as it discusses the approach of modifying an initial policy based on its performance, which is a key aspect of reinforcement learning.   Rule learning is also present in the paper as it compares two approaches to learning control rules: behavior cloning (which involves emulating a perfect operator's behavior) and experimental learning (which involves guessing an initial policy and modifying it based on performance). The latter approach can be seen as a form of rule learning, as the learner is trying to discover a set of rules that lead to optimal performance.
Neural Networks.   Explanation: The paper presents a connectionist method for representing images that explicitly addresses their hierarchical nature, blending data from neuroscience about whole-object viewpoint sensitive cells and attentional basis-field modulation with ideas about hierarchical descriptions based on microfeatures. The resulting model makes critical use of bottom-up and top-down pathways for analysis and synthesis, and is illustrated with a simple example of representing information about faces. These are all characteristics of neural network models.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper discusses experiments that use genetic programming systems to evolve computer programs that perform cognitive tasks. These systems include special mechanisms for cultural transmission of information, which can have a beneficial impact on the evolvability of correct programs.  Theory: The paper discusses the role of culture in the evolution of cognitive systems and defines culture as any information transmitted between individuals and between generations by non-genetic means. The implications of the results for cognitive science are also briefly discussed.
Genetic Algorithms, Probabilistic Methods, Rule Learning.   Genetic Algorithms: The paper discusses the formulation and optimization of design problems, a common application of genetic algorithms, and the system it develops allows optimization strategies to be generated and tested interactively.  Probabilistic Methods: The paper examines how the formulation of the search space, objective function, and constraints affects the optimization process, a common consideration in probabilistic methods, and the system supports experimental evaluation of optimization strategies on test problems.  Rule Learning: The paper discusses the interactive formulation, testing, and reformulation of design optimization strategies; the system represents optimization strategies as dataflow graphs and supports transformations between these graphs, which resembles rule-based manipulation.
Reinforcement Learning, Probabilistic Methods, Theory.   Reinforcement learning is the most related sub-category as the paper presents incremental planning methods based on updating an evaluation function and situation-action mapping of a reactive system, which is a key concept in reinforcement learning.   Probabilistic methods are also relevant as the paper mentions that the incremental planning methods are well suited to stochastic tasks, which involve uncertainty and probability.   Finally, the paper belongs to the Theory sub-category as it presents the basic results and ideas of dynamic programming as they relate to planning in AI, forming the theoretical basis for the incremental planning methods used in the Dyna architecture.
Rule Learning, Probabilistic Methods  The paper belongs to the sub-category of Rule Learning as it discusses the design and evaluation of a rule induction algorithm. The algorithm is used to learn rules from data and is based on a probabilistic approach. The paper also discusses the use of probability in evaluating the quality of the learned rules. Therefore, the paper also belongs to the sub-category of Probabilistic Methods.
Case Based, Probabilistic Methods.   Case Based: The paper reports about a project on document retrieval using a CBR (Case-Based Reasoning) approach. The system developed is a running prototypical system which is currently under practical evaluation.   Probabilistic Methods: The objective of the project is to provide a tool that helps finding documents related to a given query, such as answers in Frequently Asked Questions databases. This involves probabilistic methods such as calculating the probability of a document being relevant to a given query.
Theory.   Explanation: The paper is focused on theoretical analysis of the query complexity of exact learning in the membership and (proper) equivalence query model. It does not involve any practical implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning.
Case Based, Rule Learning.   Case-based reasoning is the main focus of the paper, as the evaluation is done on Anapron, a system that uses a combination of rule-based and case-based reasoning to pronounce names. The paper also discusses lessons learned for CBR evaluation methodology and CBR theory. Rule learning is also mentioned as a component of Anapron's operation.
Reinforcement Learning, Theory.   Reinforcement learning is directly mentioned in the abstract as the context for the value function being discussed. The paper is focused on deriving a theoretical bound on the performance of a greedy policy based on an imperfect value function, which falls under the category of theory.
Neural Networks, Theory.   Neural Networks: The paper presents an algorithm for training neural networks to implement the CDM.   Theory: The paper develops a theoretical framework for measuring the quality of vector quantization points and function approximation. It introduces the concept of a canonical distortion measure and shows how it can be calculated for different function classes. It also justifies the use of this measure by demonstrating that optimizing the reconstruction error of X with respect to the CDM gives rise to optimal piecewise constant approximations of the functions in the environment.
Theory.   Explanation: The paper discusses a theory revision system and its optimization, which falls under the category of theory-based AI methods. The paper does not mention any other sub-categories of AI such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Probabilistic Methods.   Explanation: The paper describes a nonparametric approach for estimating density and hazard rate functions from randomly right-censored data. The method is based on counting the number of events within each interval and then smoothing the number of events and the survival function separately over time via linear wavelet smoothers; the hazard rate function estimators are obtained by taking the ratio. The paper also proves that the estimators are pointwise and globally mean-square consistent, achieve the best possible asymptotic MISE convergence rate, and are asymptotically normally distributed. These are all characteristics of probabilistic methods.
Case Based, Rule Learning  Explanation:  The paper primarily focuses on case-based reasoning, which falls under the category of Case Based AI. The paper also discusses the difficulty of encoding effective adaptation rules by hand, which suggests the presence of Rule Learning.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms: The paper describes a method for using genetic algorithms to evolve buildable objects for evolutionary design by computers. The authors explain how they use a genetic algorithm to generate and evolve a population of objects, with each object represented as a string of genes that encode its properties. The fitness of each object is evaluated based on its ability to be built using a 3D printer, and the fittest objects are selected for reproduction and mutation to create the next generation.  Reinforcement Learning: The paper also discusses the use of reinforcement learning to improve the fitness of the evolved objects. The authors describe how they use a simulation environment to evaluate the fitness of the objects, and how they use reinforcement learning to optimize the parameters of the simulation to better match the real-world behavior of the 3D printer. This allows the genetic algorithm to evolve objects that are not only buildable, but also optimized for the specific 3D printer being used.
Neural Networks.   Explanation: The paper discusses the use of computational models, specifically feed-forward neural network models, to explain the double dissociation between prosopagnosia and visual object agnosia. The models incorporate a competitive selection mechanism and biasing of modules to account for the specialization of face processing in the brain. Therefore, the paper primarily belongs to the sub-category of Neural Networks in AI.
Probabilistic Methods.   Explanation: The paper focuses on Bayesian networks, which are probabilistic graphical models used for probabilistic reasoning and decision-making under uncertainty. The paper specifically addresses the issue of missing data in Bayesian networks and proposes a method for robust parameter learning in such scenarios. The paper uses probabilistic methods such as maximum likelihood estimation and Bayesian inference to estimate the parameters of the Bayesian network.
Rule Learning, Theory.   Rule Learning is the most related sub-category as the paper discusses the adaptation and linking of ILP-systems to relational database systems for knowledge discovery, which involves learning rules from data.   Theory is also relevant as the paper discusses the theoretical basis of ILP and its potential applications in the knowledge discovery field.
Probabilistic Methods, Reinforcement Learning, Theory.   Probabilistic Methods: The paper discusses the use of dynamic programming for solving general POMDPs, which involves probabilistic methods for modeling uncertainty.   Reinforcement Learning: The paper focuses on solving partially observable Markov decision processes (POMDPs), which are a type of reinforcement learning problem.   Theory: The paper presents and compares different algorithms for solving POMDPs, and discusses their theoretical and empirical performance. It also proposes a new algorithm, incremental pruning, and provides theoretical and empirical evidence for its efficiency.
Theory.   Explanation: The paper focuses on improving error bounds based on VC analysis for machine learning classes with sets of similar classifiers. It does not discuss any specific AI techniques or applications, but rather presents theoretical results that can be applied to various machine learning algorithms, including separating planes and artificial neural networks. Therefore, the paper belongs to the sub-category of AI theory.
Rule Learning, Theory.   Explanation:  This paper belongs to the sub-category of Rule Learning because it describes the use of formal grammars as a means of detecting and assembling higher-order structures in biological sequences. The authors describe a grammar and parser for eukaryotic protein-encoding genes, which is optimized for several different species.   It also belongs to the sub-category of Theory because it presents a theoretical framework for gene structure prediction based on linguistic methods. The authors discuss the relative importance of compositional, signal-based, and syntactic components in gene prediction, and perform mixing experiments to determine the degree of species specificity.
Neural Networks, Theory.   Neural Networks: The paper discusses a feed-forward computational model of visual processing that involves two competing modules for classifying input stimuli.   Theory: The paper presents a theoretical explanation for the double dissociation between prosopagnosia and visual object agnosia, proposing that face and non-face object recognition may be served by partially independent mechanisms in the brain. It also discusses the underlying mechanisms of normal adult performance on face and object recognition tasks, suggesting that face recognition is primarily configural and object recognition is primarily featural.
Neural Networks.   Explanation: The paper specifically mentions the application of the proposed method to neural networks, indicating that it falls under the sub-category of AI related to neural networks. No other sub-categories are mentioned or implied in the text.
Neural Networks, Probabilistic Methods, Theory.   Neural Networks: The paper presents a model of visual cortical plasticity based on the BCM theory, which is a model of neural network learning. The paper discusses the behavior and evolution of the network under various visual rearing conditions.  Probabilistic Methods: The paper discusses the connection between the unsupervised BCM learning procedure and various statistical methods, including Projection Pursuit. The paper also notes that the BCM theory involves a sophisticated statistical procedure.  Theory: The paper presents an objective function formulation of the BCM theory, which provides a general method for stability analysis of the fixed points of the theory. The paper also discusses the behavior and evolution of the network under various visual rearing conditions and allows comparison with many existing unsupervised methods.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents a hybrid (supervised and unsupervised) Neural Network for the classification of normalized face images.   Probabilistic Methods: The paper does not explicitly mention the use of probabilistic methods, but the classification process of the Neural Network likely involves probabilistic calculations.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses the use of the EM algorithm for parameter estimation in RBF neural networks, which is a probabilistic method.  Neural Networks: The paper focuses on the use of RBF neural networks for process control and discusses their advantages in approximating highly nonlinear plants and being well suited for linear adaptive control. The paper also discusses the interpretation of RBFs as mixtures of Gaussians, which is a common approach in neural network modeling.
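The "RBFs as mixtures of Gaussians" view mentioned in this entry can be illustrated with a minimal EM fit of a one-dimensional, two-component Gaussian mixture. The data, initialization, and iteration count below are illustrative assumptions, not from the paper; the E- and M-steps are the standard EM updates.

```python
import math
import random

# Minimal 1-D, two-component Gaussian mixture fitted by EM.
def em_gmm(data, iters=50):
    mu = [min(data), max(data)]          # crude initialization at the extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: re-estimate mixing weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)   # guard against variance collapse
    return pi, mu, var

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(200)] + \
       [rng.gauss(5.0, 1.0) for _ in range(200)]
pi, mu, var = em_gmm(data)
```

On this synthetic data, the fitted means converge to roughly 0 and 5 with mixing weights near one half each; each fitted Gaussian plays the role of one RBF basis function.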
Case Based, Rule Learning  Explanation:   - Case Based: The paper describes CABINS, a framework that uses case-based reasoning to optimize solutions in ill-structured domains. CABINS creates an initial model of the optimization task through task structure analysis, and then specializes generic vocabularies into case feature descriptions for application problems. The framework improves the model through the accumulation of cases.  - Rule Learning: While the paper does not explicitly mention rule learning, CABINS can be seen as a form of rule learning, as it creates rules for optimizing solutions based on past cases. The framework iteratively revises the model through case-based reasoning, accumulating knowledge and improving the rules for future optimization tasks.
Neural Networks.   Explanation: The paper discusses the behavior of single compartment Hodgkin-Huxley model neurons, which are a type of artificial neural network. The authors suggest that the neurons may operate in a narrow parameter regime where synaptic and intrinsic conductances are balanced to reflect detailed correlations in the inputs, which is a key characteristic of neural networks. The paper does not discuss any other sub-categories of AI.
Genetic Algorithms, Neural Networks.   Genetic Algorithms: The paper proposes a classification of the encoding strategies used in genetic algorithms for neural network optimization. It also gives a critical analysis of the current state of development in this field.   Neural Networks: The paper discusses the application of genetic algorithms to the optimization of artificial neural networks. It also mentions the metaphor of the evolution of the human brain as inspiration for this approach. The paper analyzes the different techniques used to encode neural networks for genetic algorithms.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper proposes an object recognition scheme based on a method for feature extraction from gray level images that corresponds to a biologically motivated feature extracting neuron.   Probabilistic Methods: The feature extraction method used in the paper is based on projection pursuit, a statistical technique. The paper also evaluates the performance of the method using psychophysical 3D object recognition experiments.
Probabilistic Methods.   Explanation: The paper proposes a wavelet shrinkage method by imposing natural properties of Bayesian models on data. The Bayes rules and Bayes factors are used to perform nonlinear wavelet shrinkage. This approach is based on probabilistic methods, which involve the use of probability theory to model uncertainty and make predictions.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of neural networks for the prediction of foreign exchange rates and how clearning can improve their performance. The paper also mentions the architecture of the neural network used, including the number of inputs and hidden units.  Probabilistic Methods: The paper discusses the statistical foundation of clearning from a maximum likelihood perspective, which is a probabilistic method. The paper also mentions how clearning can be used to estimate the overall signal-to-noise ratio of each input variable and how error estimates for each pattern can be used to detect and remove outliers, which are probabilistic methods.
Probabilistic Methods, Reinforcement Learning  Probabilistic Methods: The paper discusses the use of probabilistic models for performance enhancement, specifically in the context of Bayesian networks. The authors propose a method for constructing Bayesian network wrappers that can be used to improve the performance of existing machine learning algorithms.  Reinforcement Learning: The paper also discusses the use of oblivious decision graphs (ODGs) for reinforcement learning. The authors propose a method for constructing ODGs that can be used to learn optimal policies in a variety of settings. They demonstrate the effectiveness of their approach on several benchmark problems.
Probabilistic Methods.   Explanation: The paper compares a Winnow-based algorithm to a Bayesian classifier for context-sensitive spelling correction, which is a task in natural language processing. Both Winnow and Bayesian classifiers are examples of probabilistic methods in machine learning. The paper discusses the performance of these methods and their ability to adapt to unfamiliar test sets, which are key characteristics of probabilistic methods.
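For concreteness, the Winnow half of the comparison follows the standard multiplicative-update scheme; a minimal sketch with Boolean features and the usual threshold theta = n (parameters are illustrative assumptions, not the paper's configuration):

```python
def winnow_train(examples, n_features, alpha=2.0):
    """Multiplicative-update Winnow for Boolean features (illustrative sketch).

    examples: list of (feature_vector, label) pairs with 0/1 entries.
    """
    w = [1.0] * n_features
    theta = n_features  # standard threshold choice
    for x, y in examples * 20:  # a few passes over the data
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if pred != y:
            for i in range(n_features):
                if x[i]:
                    # promote active weights on false negatives, demote on false positives
                    w[i] = w[i] * alpha if y == 1 else w[i] / alpha
    return w, theta
```

Because updates are multiplicative and touch only active features, Winnow's mistake bound grows only logarithmically in the number of irrelevant features, which is the property the comparison with the Bayesian classifier turns on.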
Probabilistic Methods.   Explanation: The paper discusses various notions of geometric ergodicity for Markov chains, which is a probabilistic concept. The paper also applies these concepts to a collection of chains commonly used in Markov chain Monte Carlo simulation algorithms, which is a probabilistic method for sampling from complex distributions. The other sub-categories of AI (Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, Theory) are not present in the text.
Theory.   Explanation: The paper presents a theoretical algorithm for solving the Perfect Phylogeny Problem, which is a problem in computational biology. The paper does not use any AI techniques such as neural networks, genetic algorithms, or reinforcement learning. It is purely a theoretical algorithm based on graph theory and combinatorial optimization. Therefore, the paper belongs to the sub-category of AI called Theory.
Probabilistic Methods, Reinforcement Learning  Probabilistic Methods: The paper discusses the use of probabilistic methods such as Bayesian inference and Monte Carlo simulation to model uncertainty in the extended two-dimensional pursuer/evader problem. The authors also use probabilistic methods to estimate the probability of capture and evasion.  Reinforcement Learning: The paper proposes a methodology for strategy optimization under uncertainty using reinforcement learning. The authors use a Q-learning algorithm to learn the optimal strategy for the pursuer in the extended two-dimensional pursuer/evader problem. The algorithm is trained using a simulation environment that models the uncertainty in the problem.
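The Q-learning component referenced above follows the standard tabular update rule; a minimal sketch on a toy chain world rather than the pursuer/evader domain (all parameters and the environment are illustrative assumptions):

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a deterministic chain (illustrative sketch).

    Actions: 0 = left (clamped at state 0), 1 = right; reward 1 only on
    entering the rightmost, terminal state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if rng.random() < eps:            # epsilon-greedy exploration
                a = rng.randrange(2)
            else:
                a = 1 if q[s][1] >= q[s][0] else 0
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: bootstrap from the greedy value of s2
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy (always move right) emerges from the learned Q-values, with each state's value discounted by its distance from the goal.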
Neural Networks.   Explanation: The paper discusses the architecture of gated experts, which is a type of neural network consisting of a gating network and several competing experts. The paper also evaluates the performance of this architecture in comparison to single networks and networks with two outputs. The focus is on avoiding overfitting, which is a common problem in neural networks.
Probabilistic Methods.   Explanation: The paper discusses the problem of constructing a Bayesian network model of an unknown probability distribution, and proposes a method for searching the model space using a stochastic simulated annealing algorithm. The Bayesian network model is a probabilistic method for representing and reasoning about uncertain knowledge, and the simulated annealing algorithm is a probabilistic optimization method. The paper also mentions the polynomial time algorithm for Bayesian reasoning in tree-structured Bayesian networks, which is a probabilistic method for inference.
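The stochastic simulated-annealing search described above can be sketched generically; here the score is a toy bit-counting stand-in for a Bayesian-network structure score, and the cooling schedule is an assumption, not the paper's:

```python
import math
import random

def anneal(score, neighbor, x0, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated-annealing search (illustrative sketch).

    Worse candidates are accepted with probability exp(delta / T), letting
    the search escape local optima early; T shrinks geometrically."""
    rng = random.Random(seed)
    x = best = x0
    t = t0
    for _ in range(steps):
        cand = neighbor(x, rng)
        delta = score(cand) - score(x)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            x = cand
            if score(x) > score(best):
                best = x
        t = max(t * cooling, 1e-9)
    return best

def flip_one_bit(x, rng):
    """Neighborhood move: flip a single randomly chosen bit."""
    i = rng.randrange(len(x))
    return x[:i] + (1 - x[i],) + x[i + 1:]

# Toy stand-in for a model-structure score: number of 1-bits.
best = anneal(score=sum, neighbor=flip_one_bit, x0=(0,) * 10)
```

In the structure-search setting, the state would be a candidate network, the neighbor move an edge addition/removal, and the score a posterior or scoring metric over structures.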
Neural Networks, Reinforcement Learning  Explanation:  The paper belongs to the sub-category of Neural Networks because it reports on the successful practical application of forward-tracking to the evolutionary training of (constrained) neural networks.   It also belongs to the sub-category of Reinforcement Learning because forward-tracking is a technique for searching beyond failure, which can be useful in reinforcement learning applications where partial satisfaction of requirements is common.
Probabilistic Methods, Theory  Probabilistic Methods: The paper analyzes and characterizes plateaus for three different classes of randomly generated Boolean Satisfiability problems. The analysis involves statistical methods and probability distributions.  Theory: The paper discusses the topology of local search algorithms and their performance in terms of finding solutions to combinatorial search problems. It also proposes strategies for creating the next generation of local search algorithms. The paper is focused on theoretical analysis rather than practical implementation.
Neural Networks, Theory.   Neural Networks: The paper proposes a neural network theory to explain how the human visual system binds together visual properties of multiple objects. The theory is based on neural mechanisms that construct and update object representations through the interactions of attentional mechanisms, preattentive grouping mechanisms, and an associative memory structure.  Theory: The paper presents a theoretical framework for solving the temporal binding problem in visual perception. The proposed theory is based on neural mechanisms and provides a unified quantitative explanation of results from psychophysical experiments on object review, object integration, and multielement tracking.
Genetic Algorithms.   Explanation: The paper explicitly discusses tracing the behavior of genetic algorithms and applies this tracing to various aspects of genetic algorithm behavior, such as stable points and fitness of schemata. The other sub-categories of AI listed are not mentioned in the paper.
Genetic Algorithms.   Explanation: The paper proposes an evolutionary solver for solving routing problems, which is a type of genetic algorithm. The solver uses a population of candidate solutions and applies genetic operators such as mutation and crossover to generate new solutions. The fitness of each solution is evaluated based on the constraints and optimization criteria of the problem. The paper compares the performance of the evolutionary solver to other solvers, including a biased random solver and a biased hillclimber solver. Therefore, the paper belongs to the sub-category of Genetic Algorithms in AI.
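The population/mutation/crossover loop described above can be sketched minimally; here fitness is bit-matching against a fixed target rather than a routing objective, and all parameters are illustrative:

```python
import random

def genetic_search(target, pop_size=40, generations=120, p_mut=0.05, seed=1):
    """Minimal generational GA: tournament selection, one-point crossover,
    bit-flip mutation; fitness = bits matching the target (illustrative)."""
    rng = random.Random(seed)
    n = len(target)

    def fitness(ind):
        return sum(a == b for a, b in zip(ind, target))

    def select():  # tournament of size 2
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    pop = [[rng.randrange(2) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n)                 # one-point crossover
            child = p1[:cut] + p2[cut:]
            nxt.append([b ^ (rng.random() < p_mut) for b in child])  # mutation
        pop = nxt
    return max(pop, key=fitness)

best = genetic_search(target=[1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
```

For a routing problem the encoding, crossover operator, and fitness function would be domain-specific, but the selection/variation loop has this same shape.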
Case Based.   Explanation: The paper is focused on agents that learn and solve problems using Case-based Reasoning (CBR), and presents two modes of cooperation among them: Distributed Case-based Reasoning (DistCBR) and Collective Case-based Reasoning (ColCBR). The extension presented, Plural Noos, allows communication and cooperation among agents implemented in Noos by means of three basic constructs: alien references, foreign method evaluation, and mobile methods.
Probabilistic Methods, Case Based, Genetic Algorithms.   Probabilistic Methods: The paper discusses Bayesian belief networks, which are a probabilistic method for representing and reasoning about uncertainty.  Case Based: The paper proposes a combination of genetic algorithms and case-based reasoning for determining an optimal factoring for the distribution represented in the Bayesian network.  Genetic Algorithms: The paper discusses the use of genetic algorithms to improve the quality of the computed factoring in the case of a static strategy.
Neural Networks.   Explanation: The paper focuses on analyzing the computation and communication requirements of connectionist algorithms, which are a type of artificial neural network. The back-propagation algorithm, which is specifically analyzed in the paper, is a widely used neural network algorithm for training multi-layer perceptrons. Therefore, the paper belongs to the sub-category of AI known as Neural Networks.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper presents a method of constructing a world model using the backpropagation learning algorithm, which is a type of neural network. The model is used to predict future reinforcements and determine good actions directly from the knowledge of the model network.  Reinforcement Learning: The paper describes a planning method that maximizes future reinforcement to derive suboptimal plans. This is done through gradient descent in action space, which is a common technique in reinforcement learning.
Probabilistic Methods.   Explanation: The paper presents methods for computing probabilities of counterfactual queries using a formulation proposed in [Balke and Pearl, 1994], which involves interpreting the antecedent of the query as an external action that forces a proposition to be true. The paper discusses how to evaluate these probabilities when prior knowledge is available on the causal mechanisms governing the domain, and also when causal knowledge is specified as conditional probabilities on the observables. The paper demonstrates the use of these techniques in two applications related to treatment efficacy and product-safety litigation. All of these aspects are related to probabilistic methods in AI.
Probabilistic Methods.   Explanation: The paper is about Bayesian Networks, which are a type of probabilistic graphical model used for probabilistic reasoning and decision making. The paper discusses the principles and applications of Bayesian Networks, as well as their advantages and limitations. The paper also provides examples of how Bayesian Networks can be used in various fields, such as medicine, finance, and engineering. Therefore, the paper is most related to the sub-category of Probabilistic Methods in AI.
Probabilistic Methods, Neural Networks  The paper belongs to the sub-category of Probabilistic Methods as it uses logistic regression to model the probability of a binary outcome. The authors also use projection pursuit, a technique that involves finding a low-dimensional representation of high-dimensional data, to identify important features for the logistic regression model.  The paper also belongs to the sub-category of Neural Networks as the authors use a neural network to estimate the probability of a binary outcome. Specifically, they use a feedforward neural network with a single hidden layer to model the relationship between the input features and the binary outcome.
Theory  Explanation: The paper discusses an architectural technique called boosting that supports general speculative execution in simpler, statically-scheduled processors. The paper evaluates how much speculative execution support is necessary to achieve good performance. The focus is on the theoretical aspects of processor design and performance optimization, rather than on the application of specific AI sub-categories.
This paper belongs to the sub-category of AI called Neural Networks.   Explanation: The paper discusses a problem related to feature selection in neural networks, specifically the Minimum Feature Set Problem. The authors propose a solution using a genetic algorithm to optimize the feature set. The paper also includes experimental results using neural networks to demonstrate the effectiveness of their approach. Therefore, the paper is primarily focused on the application of neural networks to solve a specific problem.
This paper belongs to the sub-category of AI called Probabilistic Methods.   Explanation: The paper proposes a decision-theoretic approach to case-based reasoning, which involves probabilistic methods such as Bayesian networks and decision trees. The authors use probabilistic models to represent uncertainty and to make decisions based on the available evidence. They also discuss the use of probabilistic methods in evaluating the performance of the system and in selecting the most appropriate cases for retrieval. While other sub-categories of AI may also be relevant to this paper, such as Rule Learning and Theory, the focus on probabilistic methods is the most prominent.
Probabilistic Methods.   Explanation: The paper discusses the use of clustering algorithms to group similar learning tasks together, and then using probabilistic methods to selectively transfer knowledge between these clusters. The use of probability distributions and statistical models is a key aspect of this approach. While other sub-categories of AI may also be relevant to this research (such as neural networks for classification or reinforcement learning for decision-making), the emphasis on probabilistic methods is most prominent.
Probabilistic Methods, Case Based  Explanation:   Probabilistic Methods: The paper discusses the need to elicit probabilities and utilities in decision problems, and proposes a method for identifying a new user's utility function based on classification relative to a database of previously collected utility functions. This involves identifying clusters of utility functions that minimize an appropriate distance measure, which is a probabilistic method.  Case Based: The proposed method involves classifying a new user's utility function based on similarity to previously collected utility functions, which is a case-based approach. The paper discusses how this approach is more efficient and robust than traditional utility elicitation methods based solely on preferences.
Probabilistic Methods.   Explanation: The paper discusses the traditional training algorithm for Hidden Markov Models, which is an expectation-maximization algorithm that maximizes a posterior probability density over the model parameters. The paper then introduces ensemble learning, which approximates the entire posterior probability distribution over the parameters. The objective function that is optimized is a variational free energy, which measures the relative entropy between the approximating ensemble and the true distribution. Therefore, the paper primarily focuses on probabilistic methods for training Hidden Markov Models.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents a neural model for multielement tracking that employs an object-based attentional mechanism for constructing and updating object representations. The model selectively enhances neural activations to serially construct and update the internal representations of objects through correlation-based changes in synaptic weights.  Probabilistic Methods: The correspondence problem between items in memory and elements in the visual input is resolved through a combination of top-down prediction signals and bottom-up grouping processes. This involves probabilistic methods for solving the motion correspondence problem.
Probabilistic Methods, Neural Networks  Probabilistic Methods: This paper discusses the use of probabilistic methods such as Bayesian networks and Markov models for data value prediction. The authors explain how these methods can be used to estimate the likelihood of a particular data value based on past observations.  Neural Networks: The paper also discusses the use of neural networks for data value prediction. The authors explain how neural networks can be trained to recognize patterns in data and make predictions based on those patterns. They also discuss the advantages and disadvantages of using neural networks for this task.
Neural Networks, Theory.   Neural Networks: The paper uses nonlinearly parametrized wavelet network models for the adaptive control scheme. The Lyapunov synthesis approach is used to develop a state-feedback adaptive control scheme based on these models.   Theory: The paper discusses the design and analysis of adaptive wavelet control algorithms for uncertain nonlinear dynamical systems. It uses the Lyapunov synthesis approach to obtain semi-global stability results. The paper also proposes formal definitions of interference and localization measures.
Reinforcement Learning. This paper belongs to the sub-category of Reinforcement Learning as it discusses the efficient implementation of TD(λ) for use with reinforcement learning algorithms optimizing the discounted sum of rewards. It proposes the TTD (Truncated Temporal Differences) procedure as an alternative to the traditional approach based on eligibility traces. The paper also mentions well-known reinforcement learning algorithms such as AHC or Q-learning, which can be viewed as instances of TD learning.
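The eligibility-trace mechanism that TTD truncates can be sketched as standard TD(lambda) policy evaluation; a minimal illustration with assumed parameters, not the paper's TTD procedure itself:

```python
def td_lambda(episodes, n_states, alpha=0.1, gamma=0.9, lam=0.8):
    """TD(lambda) policy evaluation with accumulating eligibility traces
    (illustrative sketch of the mechanism TTD approximates by truncation).

    episodes: list of [(state, reward, next_state), ...] transition lists."""
    v = [0.0] * n_states
    for episode in episodes:
        e = [0.0] * n_states                   # eligibility traces
        for s, r, s2 in episode:
            delta = r + gamma * v[s2] - v[s]   # TD error
            e[s] += 1.0                        # accumulate trace for s
            for i in range(n_states):
                v[i] += alpha * delta * e[i]   # credit recently visited states
                e[i] *= gamma * lam            # decay all traces
    return v
```

The per-step cost here is proportional to the number of states carrying a trace; truncating the traces after a fixed horizon, as TTD does, bounds that cost while approximating the same updates.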
This paper belongs to the sub-categories of AI: Reinforcement Learning, Neural Networks, Probabilistic Methods, Rule Learning.   Reinforcement Learning is present in the text as the paper discusses the use of reinforcement learning algorithms to integrate different strategies and representations.   Neural Networks are present in the text as the paper discusses the use of neural networks to learn and represent different strategies.   Probabilistic Methods are present in the text as the paper discusses the use of probabilistic models to represent uncertainty in the learning process.   Rule Learning is present in the text as the paper discusses the use of rule-based systems to represent and integrate different strategies.
Neural Networks, Reinforcement Learning  Explanation:  This paper belongs to the sub-category of Neural Networks because it proposes a network of polynomial controllers that can be trained using backpropagation. The paper also mentions the use of activation functions and hidden layers, which are common components of neural networks.  Additionally, the paper also belongs to the sub-category of Reinforcement Learning because it discusses the use of a reward function to train the network. The paper mentions the use of a cost function that penalizes the network for deviating from the desired output, which is a common approach in reinforcement learning. The paper also mentions the use of a learning rate, which is a common parameter in reinforcement learning algorithms.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper applies machine learning algorithms to learning problems formulated as regression or classification tasks. These algorithms are probabilistic in nature, as they use statistical models to make predictions based on the available data.  Rule Learning: The paper also discusses the use of propositional machine learning algorithms, which are based on the extraction of rules from the available data. These rules can be used to make predictions or to identify patterns in the data. The algorithm for Relational Regression is an example of a rule learning algorithm, as it utilizes all the information contained in the relations of the database to make predictions.
Probabilistic Methods.   Explanation: The paper discusses the use of Gaussian process priors for regression, which is a probabilistic method. The authors also mention Bayesian inference, which is another probabilistic method. The paper does not discuss any other sub-categories of AI.
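The Gaussian-process regression setup referenced above admits a compact closed-form predictor; a self-contained sketch with an RBF covariance (the kernel choice, length scale, and noise level are assumptions for illustration):

```python
import math

def rbf(a, b, ell=1.0):
    """Squared-exponential (RBF) covariance between scalar inputs."""
    return math.exp(-0.5 * (a - b) ** 2 / ell ** 2)

def solve(A, y):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_predict(xs, ys, x_star, noise=1e-6):
    """GP posterior mean and variance at x_star under an RBF prior (sketch)."""
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = solve(K, ys)                       # K^{-1} y
    k_star = [rbf(a, x_star) for a in xs]
    mean = sum(ai * ki for ai, ki in zip(alpha, k_star))
    v = solve(K, k_star)                       # K^{-1} k_star
    var = rbf(x_star, x_star) - sum(ki * vi for ki, vi in zip(k_star, v))
    return mean, var
```

Near the training data the posterior variance collapses toward the noise level, while far away it reverts to the prior variance; this calibrated uncertainty is what distinguishes the Bayesian treatment from a point-estimate regressor.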
Genetic Algorithms.   Explanation: The paper presents a prototype learning system that uses a genetic algorithm to evolve the number and positions of prototypes for each class. The use of genetic algorithms is a key aspect of the system and is mentioned throughout the paper. The other sub-categories of AI are not directly relevant to the content of the paper.
Theory.   Explanation: The paper focuses on the theoretical problem of asymptotic identification for a class of fading memory systems in the presence of bounded noise. The methods and results presented are based on theoretical analysis and do not involve the use of specific AI techniques such as neural networks or genetic algorithms.
Neural Networks, Probabilistic Methods, Rule Learning.   Neural Networks: The paper describes Rapture, a system that uses a modified version of backpropagation, a neural network learning algorithm, to refine the certainty factors of a probabilistic rule base.   Probabilistic Methods: Rapture is a system for revising probabilistic knowledge bases. The paper also mentions using ID3's information-gain heuristic to add new rules.   Rule Learning: The paper describes Rapture's approach to refining certainty-factor rule bases, using a combination of connectionist and symbolic learning methods. The system uses backpropagation to refine existing rules and ID3's information-gain heuristic to add new rules.
Reinforcement Learning, Probabilistic Methods, Rule Learning  Reinforcement Learning is present in the paper as the approach taken is to treat learning-strategy selection as a separate planning problem with its own set of goals, similar to the goal-management problems associated with traditional planning systems.  Probabilistic Methods are present in the paper as the authors explore some issues, problems, and possible solutions in a framework where learning-strategy selection is treated as a separate planning problem with its own set of goals.  Rule Learning is present in the paper as the authors present examples from a multistrategy learning system called Meta-AQUA, which is a rule-based system.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the use of statistical mechanics to model the volatility of Eurodollar futures as a stochastic process, which is a probabilistic approach.   Theory: The paper presents a theoretical framework for modeling the volatility of financial markets, specifically the Eurodollar futures market, using a statistical mechanics of financial markets (SMFM) approach. The paper also discusses the need for a generalization of the standard Black-Scholes model to account for the stochastic nature of volatility.
Probabilistic Methods.   Explanation: The paper discusses plausibility measures as a new approach to modeling uncertainty, which is easily seen to generalize other approaches to modeling uncertainty, such as probability measures, belief functions, and possibility measures. The paper also examines the algebraic properties of plausibility measures, which are analogues of + and × in probability theory. Therefore, the paper belongs to the sub-category of AI known as Probabilistic Methods.
Probabilistic Methods, Case Based  Explanation:   Probabilistic Methods: The paper mentions the use of "statistical and probabilistic techniques" in combination with temporal abstractions to analyze the data from home monitoring of diabetic patients. This indicates the use of probabilistic methods in the pre-processing and analysis of the data.  Case Based: The paper also describes how Intelligent Data Analysis methods may be used to index past cases and perform a case-based retrieval in a database of past cases. This indicates the use of case-based reasoning in the analysis of the data.
Probabilistic Methods, Case Based  Explanation:   Probabilistic Methods: The Diverse Density framework described in the paper is based on probabilistic methods. It uses a probabilistic model to estimate the probability of a bag being positive or negative based on the instances it contains.  Case Based: The paper describes the application of the Diverse Density framework to learn a simple description of a person from a series of images (bags) containing that person. This is an example of a case-based approach, where the system learns from specific instances (bags) rather than general rules.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper discusses the use of maximum likelihood estimation to estimate the smoothing parameter in smoothing splines. This is a probabilistic method as it involves modeling the likelihood of the data given the smoothing parameter.  Theory: The paper presents a theoretical framework for generalized approximate cross-validation for smoothing splines with non-Gaussian data. It discusses the properties of the method and provides mathematical proofs for its effectiveness.
Rule Learning, Theory.   Explanation:  This paper belongs to the sub-category of Rule Learning because it discusses the use of restrictions on the number and depth of existential variables in ILP, which is a form of rule learning. It also belongs to the sub-category of Theory because it presents lower bounds on the complexity of hypothesis spaces in ILP and proposes alternative approaches for reducing the hypothesis space.
Reinforcement Learning, Theory.   Reinforcement learning is present in the text as the experiments described involve a program learning to traverse state spaces through trial and error, receiving feedback in the form of rewards or penalties.   Theory is also present in the text as the paper discusses the relationship between learning and forgetting, and analyzes the economics of learning. The paper also proposes the idea that knowledge can sometimes have a negative value and that research into knowledge acquisition should take seriously the possibility that knowledge may sometimes be harmful. The paper concludes that learning and forgetting are complementary processes which construct and maintain useful representations of experience.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper applies a probabilistic framework for comparing different bases objectively by calculating their probability given the observed data or by measuring the entropy of the basis function coefficients.   Neural Networks: The paper applies a general technique for learning overcomplete bases to the problem of finding efficient image codes. The learned bases are Gabor-like in structure and higher degrees of overcompleteness produce greater sampling density in position, orientation, and scale. The paper also demonstrates the improvement in the representation of the learned bases by showing superior performance in image denoising and filling-in of missing pixels.
Neural Networks.   Explanation: The paper discusses the use of a finite neural network to simulate a universal Turing machine. It does not mention any other sub-categories of AI such as case-based reasoning, genetic algorithms, probabilistic methods, reinforcement learning, or rule learning.
Genetic Algorithms, Theory.  Explanation:  1. Genetic Algorithms: The paper proposes a Genetic Programming approach to estimate the Kolmogorov complexity of binary strings. This approach involves evolving a population of Lisp programs to find the optimal program that generates a given string. This is a classic example of using genetic algorithms to solve a problem. 2. Theory: The paper deals with the problem of Kolmogorov complexity, which is a theoretical concept in computer science. The authors propose a new approach to estimate this complexity, which is based on the theory of genetic programming. The paper also discusses the limitations of existing methods and the advantages of the proposed approach. Therefore, the paper belongs to the sub-category of Theory.
Rule Learning.   Explanation: The paper focuses on the use of a specific type of blocking process in the context of decision tree learning, which is a type of rule learning. The authors discuss how this model can be extended to deal with other hypothesis classes, but the main focus is on decision trees. The other sub-categories of AI listed in the question (Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning) are not directly relevant to the content of the paper.
Case Based, Reinforcement Learning  Explanation:  - Case Based: The paper proposes a case-based method of selecting behavior sets for the new system, ACBARR. The system is designed to provide more flexible performance in novel environments by using past experiences (cases) to inform its behavior.  - Reinforcement Learning: The paper discusses how the new system, ACBARR, overcomes a standard "hard" problem for reactive systems, the box canyon, by using reinforcement learning. The system learns from its experiences and adjusts its behavior accordingly to avoid getting stuck in a dead-end situation.
Genetic Algorithms, Rule Learning.   Genetic Algorithms: The paper introduces a new algorithm called SET-Gen that uses genetic search to select the set of input features C4.5 is allowed to use to build its decision tree. This is a clear example of the use of genetic algorithms in the context of machine learning.  Rule Learning: The paper focuses on improving the comprehensibility of decision trees grown by standard C4.5 without reducing accuracy. This is achieved by selecting a smaller set of input features to build the tree, which can be seen as a form of rule learning. Additionally, the paper discusses the statistical significance of the results obtained, which is a common practice in rule learning research.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses a method for automatically determining the structure and connection weights of a Boltzmann machine based on a Bayesian network representation of a probability distribution. The mapping from a Bayesian network to a Boltzmann machine is a probabilistic method for incorporating a priori information into a neural network architecture.  Neural Networks: The paper discusses the use of Boltzmann machines, which are a type of neural network, for approximating a Gibbs sampling process of a Bayesian network. The resulting Boltzmann machine structure can be implemented efficiently on massively parallel hardware, and can be trained further with existing learning algorithms.
Probabilistic Methods. This is evident from the references to papers on constructing consistent extensions of partially oriented graphs, deciding morality of graphs, and identifying independence in Bayesian networks. These are all topics related to probabilistic graphical models and inference.
Probabilistic Methods, Rule Learning, Theory.   Probabilistic Methods: The paper models the task of classifying incomplete examples using a probabilistic process.   Rule Learning: The paper addresses the task of learning accurate default concepts from random training examples.   Theory: The paper extends Valiant's pac-learning framework to this context and obtains a number of useful learnability results.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper introduces a probabilistic feedforward neural network as a Bayesian CBR system, with one of the layers representing the cases. The MDL learning algorithm is used to obtain the proper network structure with the associated conditional probabilities.   Probabilistic Methods: The paper discusses the use of a rigorous Bayesian probability propagation algorithm for CBR reasoning in discrete domains. The MDL learning algorithm is also based on probabilistic principles.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper introduces a neural network model that maximizes the Sharpe Ratio. The output of the model is the position size between a risky and a risk-free asset. The iterative parameter update rules are derived and compared to alternative approaches.  Reinforcement Learning: The paper discusses a trading strategy that is evaluated and analyzed on both computer-generated data and real-world data. The goal is to optimize out-of-sample risk-adjusted profit, which can be achieved with this nonlinear approach. This involves learning from experience and adjusting the strategy based on the feedback received.
Genetic Algorithms, Neural Networks, Reinforcement Learning.   Genetic Algorithms: The paper discusses the use of evolutionary algorithms for the automated design of neural network architectures. This involves the use of genetic operators such as mutation and crossover to generate new network structures.  Neural Networks: The paper focuses on the design of a neuro-controller for a robotic bulldozer using evolutionary algorithms. The resulting networks are analyzed to understand how evolution exploits the design constraints and properties of the environment to produce high fitness structures.  Reinforcement Learning: The robotic bulldozer is given the task of clearing an arena littered with boxes by pushing them to the sides. This is a classic example of a reinforcement learning problem, where the robot must learn to take actions that maximize a reward signal (in this case, the number of boxes cleared). The use of evolutionary algorithms to design the neuro-controller can be seen as a form of reinforcement learning, where the fitness function serves as the reward signal.
This paper belongs to the sub-category of Genetic Algorithms.   Explanation: The paper describes the use of an evolutionary algorithm, a family of methods that includes genetic algorithms, to solve coloring problems in graphs. The authors embed a sequential procedure within the evolutionary algorithm to improve its performance. The paper discusses the use of mutation and crossover operators, which are core components of genetic algorithms. Therefore, the paper is most closely related to the sub-category of Genetic Algorithms.
Theory  Explanation: The paper presents a framework for defining and combining similarity measures using lattice-valued functions. It does not focus on any specific AI sub-category such as case-based reasoning, neural networks, or reinforcement learning. Instead, it provides a theoretical approach to similarity measures that can be applied across different AI sub-categories.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses generating the most preferred feasible configuration by posing preference queries to the user. This involves reasoning about the user's preferences while taking into account constraints over the set of feasible configurations. The algorithm is designed to minimize the number and complexity of preference queries posed to the user, which suggests a probabilistic approach to decision making.  Rule Learning: The paper assumes that the user can structure their preferences in a particular way that can be exploited during the optimization process. This suggests a rule-based approach to preference elicitation and decision making. The paper also addresses the trade-offs between computational effort and the degree of interaction with the user, which further emphasizes the importance of rule learning in this context.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the use of neural networks as a classifier and how it can be improved by combining supervised and unsupervised learning. The authors also mention the use of backpropagation, which is a common technique used in neural networks.  Probabilistic Methods: The paper discusses the use of probability theory in the context of combining supervised and unsupervised learning. The authors mention the use of a probabilistic model to estimate the posterior probability of a class given the input data. They also discuss the use of the Expectation-Maximization algorithm, which is a probabilistic method used to estimate the parameters of a model.
Case Based, Reinforcement Learning.   Case Based: The paper discusses the selection of input examples based on performance failure, which is a form of case-based reasoning.   Reinforcement Learning: The paper discusses the paradigm of failure-driven processing, which is a form of reinforcement learning where the system learns from its mistakes and failures. The paper also discusses the degrees of freedom in failure-driven learning compared to success-driven learning, which is a key concept in reinforcement learning.
Neural Networks.   Explanation: The paper discusses the application of the Gamma MLP, which is a type of neural network, to a speech phoneme recognition problem. The paper also compares the performance of the Gamma MLP to other neural network architectures, such as the TDNN and IIR MLP. The paper does not discuss any other sub-categories of AI such as Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper compares the performance of the local linear algorithm with that of five layer auto-associative networks.   Probabilistic Methods: The paper introduces a new distortion measure for clustering, which is used in the local linear algorithm. The use of clustering and PCA also suggests a probabilistic approach to dimension reduction.
Probabilistic Methods, Rule Learning  The paper belongs to the sub-categories of Probabilistic Methods and Rule Learning.   Probabilistic Methods: The paper uses probabilistic methods to model the uncertainty in gene parsing. It proposes a non-deterministic, constraint-based parsing approach that assigns probabilities to different parse trees based on the likelihood of the constraints being satisfied. The paper also discusses the use of Bayesian networks to model the dependencies between different constraints.  Rule Learning: The paper also uses rule learning to generate the constraints used in the parsing approach. It describes a method for automatically learning rules from a corpus of annotated gene sequences. The learned rules are used to generate constraints that capture the syntactic and semantic properties of gene sequences.
Theory  Explanation: The paper discusses theoretical concepts related to approximation by scattered shifts of a basis function, and compares different methods for localizing these translates. There is no mention of any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Probabilistic Methods, Rule Learning  The paper belongs to the sub-category of Probabilistic Methods because it discusses the use of Bayes' theorem in pattern recognition. The authors propose an optimum decision rule based on the posterior probabilities of the classes given the input pattern. They also discuss the use of prior probabilities and the likelihood function in computing the posterior probabilities.  The paper also belongs to the sub-category of Rule Learning because it proposes a decision rule that is based on a set of rules. The authors use a set of decision rules to classify the input pattern into one of the classes. They also discuss the use of decision trees and rule induction algorithms in generating decision rules.
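The optimum decision rule described in this entry can be sketched in a few lines; the priors and class-conditional likelihoods below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical two-class example: pick the class with the highest
# posterior probability, computed via Bayes' theorem from assumed
# priors and class-conditional likelihoods for one observed pattern x.
priors = np.array([0.6, 0.4])        # P(class)
likelihoods = np.array([0.2, 0.7])   # P(x | class)

unnormalized = priors * likelihoods  # P(x | class) * P(class)
posteriors = unnormalized / unnormalized.sum()

decision = int(np.argmax(posteriors))  # Bayes-optimal class for x
```

The normalizing constant cancels in the argmax, so the decision depends only on the product of prior and likelihood.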
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the use of probabilistic models to identify protein coding regions in genomic DNA. Specifically, it mentions the use of Hidden Markov Models (HMMs) and their ability to model the probability of observing a sequence of nucleotides given a particular state (coding or non-coding).   Rule Learning: The paper also discusses the use of rule-based methods to identify protein coding regions. Specifically, it mentions the use of decision trees and their ability to learn rules from training data to classify sequences as coding or non-coding.
Probabilistic Methods, Theory.  Probabilistic Methods: The paper discusses the use of wavelets in nonparametric regression and local spectral density estimation, which involve probabilistic methods.  Theory: The paper reviews the basics of the discrete wavelet transform and describes the construction of an inverse of the stationary wavelet transform, which are theoretical concepts. The paper also discusses the potential use of the stationary wavelet transform as an exploratory statistical method.
Neural Networks, Theory.   Neural Networks: The paper proposes a neural model of the cortical representation of egocentric distance. The model is based on the idea that the brain uses a combination of visual and motor information to estimate distance. The authors use a neural network to simulate the behavior of the model and show that it can accurately predict the perceived distance of objects in different visual contexts.  Theory: The paper presents a theoretical framework for understanding how the brain represents egocentric distance. The authors review previous research on the topic and propose a new model that integrates visual and motor information. They also discuss the implications of their model for understanding the neural basis of perception and action.
Rule Learning, Probabilistic Methods.   Rule Learning is present in the text as the paper advocates for the use of decision table classifiers, which are a type of rule-based classifier. The paper describes several algorithms for learning decision tables and compares their performance.   Probabilistic Methods are also present in the text as the paper mentions Bayesian networks as one of the hypothesis spaces that machine learning researchers have concentrated on. However, the paper argues that decision table classifiers are more comprehensible for business users than these less familiar hypothesis spaces.
Probabilistic Methods.   Explanation: The paper describes the development of hierarchical time series models for analyzing hospital quality monitors. These models are probabilistic in nature, as they involve the estimation of probability distributions for various parameters and the use of Bayesian inference methods. The paper discusses the use of Markov Chain Monte Carlo (MCMC) algorithms for fitting these models, which are a common tool in probabilistic modeling. There is no mention of any other sub-category of AI in the text.
Neural Networks.   Explanation: The paper discusses the development of a high-performance system for neural network applications and presents performance comparisons on neural network backpropagation training. The focus of the paper is on the implementation of a vector microprocessor for neural network processing, indicating that the paper belongs to the sub-category of AI related to neural networks.
Theory.   Explanation: The paper discusses the challenge of revising a rule-based theory, specifically addressing the impact of impure elements such as the Prolog cut and not() operators. The paper does not discuss case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Case Based, Rule Learning  Explanation:  - Case Based: The paper discusses computational models of precedent-based legal reasoning, which involves using past cases as a basis for decision-making. This falls under the category of case-based reasoning in AI. - Rule Learning: The paper discusses the modeling of the selection and construction of arguments based on pairwise case comparison and multiple-precedent arguments, which involves learning rules for argument construction.
Neural Networks, Rule Learning.   Neural Networks: The paper discusses the use of neural networks for financial forecasting and introduces a new intelligent signal processing method that uses a self-organizing map and recurrent neural networks. The method is applied to the prediction of daily foreign exchange rates, and the paper discusses the limitations and difficulties of using neural networks for processing high noise, small sample size signals.  Rule Learning: The paper discusses the extraction of symbolic knowledge from the recurrent neural networks in the form of deterministic finite state automata. These automata explain the operation of the system and are often relatively simple. Rules related to well-known behavior such as trend following and mean reversal are extracted.
Rule Learning, Theory.   Explanation:   The paper belongs to the sub-category of Rule Learning because it proposes a method for automating the selection of the appropriate model class using a set of heuristic rules. These rules determine which model class is the most appropriate for a given learning task based on explicit conditions.   The paper also belongs to the sub-category of Theory because it discusses the problem of selecting the appropriate learning algorithm for a given task and proposes a solution that involves combining different model classes and dynamically selecting the most appropriate one. The paper also describes how the proposed approach will be evaluated to demonstrate its efficiency and effectiveness.
Theory.   Explanation: This paper belongs to the sub-category of AI called Theory because it discusses the nature of interactions and factors involved in the development of the nervous system, specifically in the withdrawal of polyinnervation in developing muscle. It does not involve the use of any specific AI techniques such as neural networks or reinforcement learning.
Rule Learning, Case Based.   Rule Learning is present in the text as the paper presents a new approach to inductive learning that combines aspects of instance-based learning and rule induction in a single simple algorithm. The RISE system searches for rules in a specific-to-general fashion, starting with one rule per training example.   Case Based is also present in the text as the RISE system uses a best-match strategy for classification, which would reduce to nearest-neighbor classification if all generalizations of the instances were rejected. This is a characteristic of case-based reasoning.
Neural Networks, Reinforcement Learning.   Neural Networks: The paper discusses machine learning and binary classification, areas in which neural networks are widely used. The invariance approach mentioned in the paper also involves learning a model, which can be achieved with neural networks.  Reinforcement Learning: Although the paper does not explicitly mention reinforcement learning, the lifelong learning framework discussed in the paper involves encountering a multitude of related learning tasks over time, which is similar to the concept of reinforcement learning, where an agent learns from its environment through trial and error. Additionally, the invariance approach involves biasing subsequent learning, which can be seen as a form of reinforcement learning.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper presents an approach, MERLIN 2.0, which uses a technique for inducing Hidden Markov Models from positive sequences only. This technique is a probabilistic method as it involves modeling the probability distribution of the sequences.  Rule Learning: The paper discusses predicate invention, which is a subfield of rule learning. The approach presented, MERLIN 2.0, is a system for guiding predicate invention by sequences of input clauses in SLD-refutations of positive and negative examples w.r.t. an overly general theory. The paper also compares MERLIN 2.0 with the positive only learning framework of Progol 4.2, which is another system for rule learning.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses recent ideas on utility and probability, which are key concepts in probabilistic methods. The author talks about the use of probability distributions to model uncertainty and decision-making, and how these can be used to optimize utility.   Theory: The paper is focused on discussing theoretical ideas and concepts related to utility and probability. The author presents various theoretical frameworks and models for understanding these concepts, and discusses their implications for AI research and development.
Rule Learning.   Explanation: The paper presents algorithms for learning certain classes of function-free recursive logic programs from equivalence queries. These algorithms are based on the concept of rule learning, where the goal is to learn a set of rules that can be used to make predictions or decisions. The paper discusses the learnability of specific classes of recursive logic programs, which can be seen as a form of rule learning.
Probabilistic Methods, Rule Learning  Probabilistic Methods: The paper proposes a scheme to compute smoothing spline ANOVA estimates for large data sets with a (near) tensor-product structure. The scheme combines backfitting algorithm with iterative imputation algorithm in order to save both computational space and time. The convergence of this algorithm and various ways to further speed it up, such as collapsing component functions and successive over-relaxation, are discussed. These are all probabilistic methods used in the paper.  Rule Learning: The paper discusses issues related to the application of the proposed scheme in spatial-temporal analysis. An application to a global analysis of historical surface temperature data is described. These are examples of rule learning in the paper.
Rule Learning, Theory.   Explanation: The paper presents and evaluates two methods for improving the performance of ILP systems, which are based on rule learning and theoretical considerations. The discretization technique is a method for handling numerical attributes in relational learning problems, which is a common task in ILP. The lookahead technique is a method for assessing the quality of a refinement without knowing which refinements will be enabled afterwards, which is a theoretical problem in ILP. Therefore, the paper belongs to the sub-categories of Rule Learning and Theory in AI.
Probabilistic Methods.   Explanation: The paper discusses a particle filter algorithm that uses Bayesian calculations to filter time series data. The authors address issues with the algorithm's design and implementation, and introduce new methods for on-line Bayesian calculations and maximum likelihood estimation. These are all examples of probabilistic methods in AI, which involve using probability theory to model uncertainty and make predictions or decisions based on uncertain data.
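The general particle-filtering idea this entry refers to (propagate, weight, resample) can be illustrated with a minimal bootstrap filter; the state-space model, parameters, and observations below are illustrative assumptions, not the algorithm or data from the paper:

```python
import numpy as np

# Minimal bootstrap particle filter on a hypothetical 1-D random-walk
# state observed with Gaussian noise.
rng = np.random.default_rng(0)

n_particles = 500
process_sd, obs_sd = 0.5, 1.0
observations = [0.2, 0.5, 1.1, 0.9]  # hypothetical time series

particles = rng.normal(0.0, 1.0, n_particles)  # draws from the initial prior
estimates = []
for y in observations:
    # Propagate: sample from the state-transition density.
    particles = particles + rng.normal(0.0, process_sd, n_particles)
    # Weight: likelihood of the observation under each particle.
    weights = np.exp(-0.5 * ((y - particles) / obs_sd) ** 2)
    weights /= weights.sum()
    # Resample: multinomial resampling to counter weight degeneracy.
    particles = rng.choice(particles, size=n_particles, p=weights)
    estimates.append(particles.mean())
```

Each `estimates` entry approximates the posterior mean of the state given the observations so far.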
Rule Learning, Theory.   The paper discusses the use of determinations, which are a type of rule-based representation of knowledge. The ConDet algorithm is used to learn these rules from training data, which falls under the category of rule learning. The paper also discusses the theoretical aspects of determinations and their use in prediction.
Neural Networks.   Explanation: The paper focuses on the task of grammatical inference with recurrent neural networks, investigating the properties of different types of recurrent networks in this setting. The paper does not discuss other sub-categories of AI such as Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
Neural Networks. This paper belongs to the sub-category of Neural Networks. The paper proposes a new architecture for predictive models for financial data based on neural networks. The architecture includes an interaction output layer and a new internal preprocessing layer connected with a diagonal matrix of positive weights to a layer of squashing functions. The paper applies these ideas to the real-world example of daily predictions of the German stock index DAX and compares the results to a network with a single output. The architectures are compared from both the training perspective and the trading perspective.
Probabilistic Methods, Theory.   The paper proposes a new method for estimation in linear models that involves minimizing the residual sum of squares subject to a constraint on the sum of the absolute value of the coefficients. This approach is probabilistic in nature, as it involves minimizing a statistical measure of error. The paper also discusses the theoretical properties of the method, including its relationship to other approaches like subset selection and ridge regression.
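The estimator described in this entry (least squares subject to an L1 constraint on the coefficients) is commonly fit in its equivalent penalized form; a sketch via coordinate descent with soft-thresholding follows, where the data and penalty level are illustrative assumptions:

```python
import numpy as np

def soft_threshold(z, t):
    # Shrink z toward zero by t, setting it exactly to zero when |z| <= t.
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # Coordinate descent for the penalized form of the L1-constrained
    # least-squares problem.
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed.
            r = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
true_beta = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ true_beta + 0.1 * rng.normal(size=100)

beta_hat = lasso_cd(X, y, lam=20.0)
```

The soft-thresholding step is what produces exact zeros in the coefficient vector, the property that distinguishes this estimator from ridge regression.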
Case Based, Theory  Explanation:  This paper belongs to the sub-category of Case Based AI because it proposes new distance functions for instance-based learning, which is a type of case-based reasoning. It also belongs to the sub-category of Theory because it presents a new theoretical framework for handling nominal and continuous attributes in distance functions.
Genetic Algorithms.   Explanation: The paper discusses a genetic programming system and the use of non-coding segments (introns) in genetic-based encodings. It also proposes a method for duplicating coding segments in repaired chromosomes to improve learning rate. These are all characteristics of genetic algorithms, which use evolutionary principles to optimize solutions.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper proposes a modular spatio-temporal connectionist network (MSTCN) for recognizing handwritten digit strings. The network consists of multiple layers of neurons that are trained using backpropagation algorithm.   Probabilistic Methods: The paper uses a probabilistic approach to recognize digit strings by modeling the probability distribution of the input data. The authors use a Hidden Markov Model (HMM) to model the temporal dependencies between the digits in a string. The HMM is trained using the Baum-Welch algorithm, which is a probabilistic method for estimating the parameters of a hidden Markov model.
Genetic Algorithms.   Explanation: The paper discusses the extension of Holland's genetic algorithm to the task of automatic programming through genetic programming. It describes the evolution of computer programs through the use of genetic operators such as mutation and crossover. The paper also mentions the identification of primitive functions and terminals, fitness measures, and control parameters, which are all key components of genetic programming. There is no mention of other sub-categories of AI such as Case Based, Neural Networks, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
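The genetic operators named in this entry (fitness-based selection, crossover, mutation) can be illustrated on a toy bitstring problem; this sketch is a plain genetic algorithm, not the genetic programming system the paper describes, and all parameters are arbitrary choices:

```python
import random

# Toy "one-max" problem: evolve a bitstring toward all 1-bits.
random.seed(0)

GENOME_LEN, POP_SIZE, GENERATIONS = 32, 40, 60
MUTATION_RATE = 1.0 / GENOME_LEN

def fitness(genome):
    return sum(genome)  # number of 1-bits; higher is better

def crossover(a, b):
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]  # one-point crossover

def mutate(genome):
    # Flip each bit independently with small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    def select():
        # Tournament selection: best of three random individuals.
        return max(random.sample(population, 3), key=fitness)
    population = [mutate(crossover(select(), select()))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
```

Genetic programming applies the same loop to tree-structured programs built from primitive functions and terminals, with crossover swapping subtrees instead of bitstring segments.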
Neural Networks.   Explanation: The paper explores the effects of adding inertia to a continuous-time, Hopfield effective neuron system, which is a type of neural network. The paper also uses Lyapunov exponents, power spectra, and phase space plots to confirm the presence of chaos in the system, which are commonly used tools in the analysis of neural networks.
Rule Learning, Incremental Learning.   The paper describes a method for incremental learning based on the AQ15c inductive learning system, which is a rule-based learning system. The method maintains a representative set of past training examples to modify the currently held hypotheses. This is an example of incremental learning, which is a subcategory of machine learning that involves updating the model as new data becomes available.
Probabilistic Methods, Rule Learning  Explanation:   Probabilistic Methods: The paper discusses the use of fuzzy logic, an approach to reasoning under uncertainty that is often grouped with probabilistic methods. Specifically, the paper proposes a method for adapting and pruning the min-max fuzzy inference and estimation system using probability-based measures.  Rule Learning: The paper also discusses the use of rules in the fuzzy inference system. The authors propose a method for pruning the rule base by removing redundant and irrelevant rules, which is a form of rule learning. Additionally, the paper discusses the use of a genetic algorithm for optimizing the rule base, which is another form of rule learning.
Genetic Algorithms - This paper specifically focuses on studying the effect of non-coding segments on GA performance. The GA is a type of evolutionary algorithm that is modelled after the process of natural selection. The paper discusses hypotheses and experiments related to the GA and non-coding segments, which are a specific aspect of the GA.
Case Based, Reinforcement Learning  Explanation:   - Case Based: The title of the paper explicitly mentions "Case-based Acquisition" and the abstract mentions "complicated user optimization criteria" being used to guide solution improvement. This suggests that the approach involves learning from past cases and using that knowledge to inform future decisions. - Reinforcement Learning: While the abstract does not explicitly mention reinforcement learning, the phrase "use them to guide solution improvement" suggests that the approach involves iteratively improving solutions based on feedback from the user. This is a key characteristic of reinforcement learning, where an agent learns to take actions that maximize a reward signal.
Probabilistic Methods, Neural Networks  Probabilistic Methods: The paper discusses the use of Bayesian models to represent spatial attention and how they can be used to predict behavior in various tasks.  Neural Networks: The paper also discusses the use of neural network models to simulate spatial attention and how they can be used to predict behavior in various tasks. The authors also mention the use of deep learning techniques in some recent studies.
Theory  Explanation: This paper belongs to the sub-category of AI theory as it discusses different theories of human concept learning and proposes a new exemplar model that is more flexible than previous models. The paper does not discuss any specific AI techniques such as neural networks or reinforcement learning.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses the use of generalized linear models (GLMs) and their associated likelihood ratio tests for hypothesis testing. GLMs are a type of probabilistic model that assumes a probability distribution for the response variable. The likelihood ratio test is a probabilistic method used to compare the fit of two models, one of which is a null model.  Theory: The paper presents a theoretical framework for testing the null hypothesis in GLMs and proposes alternative `smooth' hypotheses. The authors derive the distribution of the test statistic under the null hypothesis and provide a method for calculating p-values. The paper also discusses the implications of the results for model selection and inference.
Probabilistic Methods.   Explanation: The paper presents a Bayesian heuristic for finding the most probable hypothesis in a framework for learning from noisy data and fixed example size. The approach evaluates a hypothesis as a whole rather than one clause at a time, and the heuristic is incorporated in an ILP system called Lime. The paper also discusses the theoretical properties of the Bayesian approach and presents experimental results comparing Lime to other ILP systems.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper presents a formulation for a network of stochastic directional units, where the state of each unit is described by a complex variable and associated with a probability distribution (von Mises distribution). The paper also associates a quadratic energy function with each configuration, which is a common approach in probabilistic modeling.  Neural Networks: The Directional-Unit Boltzmann Machine (DUBM) presented in the paper is an extension of the Boltzmann machine, which is a type of neural network. The paper describes the weights as complex variables and presents a mean-field approximation to the DUBM, which is a common technique in neural network modeling. The paper also presents a learning algorithm and simulations that demonstrate the DUBM's ability to learn mappings, which is a common application of neural networks.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses a computational model of visual perception and selective attention called morsel, which is a neural network model. The model is used to simulate the pattern of results in line bisection tests for patients with unilateral neglect.  Probabilistic Methods: The paper mentions that morsel has already been used to model data in a related disorder, neglect dyslexia. This suggests that the model uses probabilistic methods to make predictions about the behavior of patients with neurological disorders.
Neural Networks, Rule Learning.   Neural Networks: The ASOCS approach is based on an adaptive network composed of many simple computing elements which operate in a parallel asynchronous fashion.   Rule Learning: Problem specification is given to the system by presenting if-then rules in the form of boolean conjunctions. Rules are added incrementally and the system adapts to the changing rule-base.
Genetic Algorithms, Rule Learning.   Genetic Algorithms are present in the text as the authors use them to tune the parameters of the fuzzy controller. They state that the genetic algorithm is used to adjust the scaling factors and membership functions in a sequential order of significance.   Rule Learning is present in the text as the authors describe the synthesis of a fuzzy controller for tracking the velocity profile. Fuzzy logic is a form of rule-based reasoning, where the rules are expressed in terms of fuzzy sets and membership functions. The authors use a genetic algorithm to tune the parameters of the fuzzy controller, which involves adjusting the scaling factors and membership functions to improve its performance.
This paper belongs to the sub-categories of AI: Rule Learning, Case Based.   Rule Learning is present in the paper as the authors propose a rule-based approach to pronouncing names. They use a set of rules to determine the pronunciation of a name based on its spelling and linguistic characteristics.   Case Based reasoning is also present in the paper as the authors use a case-based approach to handle exceptions to the rules. They use a database of previously encountered names and their pronunciations to determine the pronunciation of new names that do not follow the rules.
Neural Networks.   Explanation: The paper describes a patient-adaptive neural network algorithm for ECG patient monitoring. The algorithm was compared with a baseline algorithm and found to significantly improve the classification of normal vs. ventricular beats. The use of a neural network is central to the algorithm's ability to adapt to individual patients and improve classification accuracy.
Case Based, Rule Learning.   Case-based reasoning is mentioned in the abstract as a component of Anapron, which combines rule-based and case-based reasoning. Rule learning is also mentioned in the abstract as the basis for Anapron's set of rules adapted from MITalk and elementary foreign-language textbooks.
Rule Learning, Theory.   Explanation:  The paper discusses the problem of learning to predict ordinal classes in an ILP (Inductive Logic Programming) setting. It starts with a relational regression algorithm named SRT (Structural Regression Trees) and explores various ways of transforming it into a first-order learner for ordinal classification tasks. The paper compares combinations of these algorithm variants with several data preprocessing methods on two ILP benchmark data sets to study the trade-off between optimal categorical classification accuracy (hit rate) and minimum distance-based error. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, or Reinforcement Learning. Therefore, the most related sub-category is Rule Learning, which involves learning rules from data, and Theory, which involves the study of principles and concepts underlying AI algorithms.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses learning algorithms for text categorization that map text to a high-dimensional feature space. The algorithms are based on probabilistic methods that learn a linear separator in the feature space.   Rule Learning: The paper discusses mistake-driven learning algorithms for text categorization that learn a linear separator in the feature space. The algorithms are based on rule learning, where the rules are represented as linear separators in the feature space. The paper also discusses modifications to the algorithms to better address the specific characteristics of the text processing domain.
Neural Networks.   Explanation: The paper discusses the implementation of sigmoidal neural nets using networks of spiking neurons, and explores the potential for these networks to be "universal approximators" for continuous functions. The paper also discusses the implications of this approach for learning rules in neural networks. These topics are all related to the subfield of AI known as neural networks.
Probabilistic Methods.   Explanation: The paper describes a Bayesian approach to estimating parameters in a functional response model, which involves constructing an estimator of a nonlinear response function from a set of functional units. The Bayesian approach involves obtaining a sample representing the Bayesian posterior distribution using the Markov chain Monte Carlo procedure, which combines Gibbs and Metropolis-Hastings algorithms. This approach is suitable for finding Bayes-optimal values of parameters in a complicated parameter space. The paper does not mention any other sub-categories of AI.
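The Gibbs/Metropolis-Hastings machinery this entry refers to can be illustrated with a minimal random-walk Metropolis sampler. The standard-normal target, proposal width, and chain length below are illustrative assumptions, not the paper's functional response model:

```python
import math
import random

random.seed(0)

def target_log_density(x):
    # Standard normal target, standing in for a Bayesian posterior.
    return -0.5 * x * x

def metropolis(n_samples, proposal_sd=1.0):
    """Random-walk Metropolis: propose a Gaussian step from the current
    point; accept with probability min(1, target(x') / target(x))."""
    x, samples = 0.0, []
    for _ in range(n_samples):
        x_new = x + random.gauss(0.0, proposal_sd)
        if math.log(random.random()) < target_log_density(x_new) - target_log_density(x):
            x = x_new
        samples.append(x)
    return samples

chain = metropolis(20000)
mean = sum(chain) / len(chain)
var = sum((s - mean) ** 2 for s in chain) / len(chain)
print(round(mean, 2), round(var, 2))
```

With enough samples the empirical mean and variance of the chain approach those of the target (0 and 1 here); a full Gibbs/Metropolis hybrid would alternate such steps with exact conditional draws.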
Neural Networks, Probabilistic Methods.   Neural Networks: The paper presents a model of information processing in primate retinal cone pathways that is based on a neural network architecture. The model includes multiple layers of processing units that are interconnected in a way that mimics the organization of the retina.   Probabilistic Methods: The paper also uses probabilistic methods to model the variability in the responses of individual cone cells and the noise in the visual signal. The authors use a Bayesian framework to estimate the parameters of the model and to make predictions about the behavior of the system under different conditions.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper uses radial basis function networks to examine the impact of feature subsets on classifier accuracy and complexity.   Probabilistic Methods: The paper proposes a feature weighting approach based on binary feature weights and continuous weights, which gives detailed information about feature relevance. This approach can be seen as a probabilistic method for feature selection.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper introduces a hierarchical Bayes model for bias learning, which is a probabilistic method.  Theory: The paper presents theoretical models of bias learning and discusses the main theoretical results.
Genetic Algorithms, Neural Networks, Reinforcement Learning.   Genetic Algorithms: The paper discusses the use of evolutionary algorithms to optimize the encoding schemes for artificial neural networks.   Neural Networks: The paper compares the efficiency of two encoding schemes for artificial neural networks.   Reinforcement Learning: The paper discusses the use of a controller to balance poles attached to a cart, which is a classic problem in reinforcement learning. The new fitness function introduced in the paper also forces the neural network to compute the velocity, which is a key component in many reinforcement learning algorithms.
Neural Networks.   Explanation: The paper discusses the use of neural networks, specifically models labeled as neural networks, which are made up of many simple nodes that are highly interconnected. The paper compares and contrasts standard neural network learning algorithms with those proposed using digital nodes, which can lead to vastly improved efficiency for many applications. The paper does not discuss any other sub-categories of AI.
Case Based  Explanation: The paper focuses on the case-based explanation model, which is a sub-category of AI that involves using specific explanations of prior episodes to construct and select abductive hypotheses.
Probabilistic Methods.   Explanation: The paper discusses a method for computing class probabilities using error-correcting output coding (ECOC) and formulating an over-constrained system of linear equations to solve for these probabilities. This approach falls under the category of probabilistic methods in AI, which involve modeling uncertainty and probability distributions in decision-making processes.
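The recovery step described here can be sketched as an over-constrained least-squares problem. The code matrix and bit outputs below are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical 3-class ECOC code matrix: rows are class codewords,
# columns are binary dichotomies (illustrative, not from the paper).
M = np.array([[1, 1, 0, 1],
              [0, 1, 1, 0],
              [1, 0, 1, 1]], dtype=float)

def ecoc_class_probs(bit_probs):
    """Recover class probabilities q from per-bit probability estimates.

    Each dichotomy classifier j estimates P(bit_j = 1) = sum_c q_c * M[c, j],
    giving the over-constrained linear system M^T q = p. Appending the
    constraint sum(q) = 1 and solving by least squares yields q.
    """
    A = np.vstack([M.T, np.ones(M.shape[0])])
    b = np.append(bit_probs, 1.0)
    q, *_ = np.linalg.lstsq(A, b, rcond=None)
    q = np.clip(q, 0.0, None)   # least squares can dip slightly below zero
    return q / q.sum()          # renormalize to a proper distribution

# Bit outputs exactly consistent with class 0 concentrate the mass there.
q = ecoc_class_probs(M[0])
print(np.round(q, 3))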
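```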
Reinforcement Learning, Theory.  Reinforcement Learning is the primary sub-category of AI that this paper belongs to. The paper discusses TD(λ) learning, a type of reinforcement learning algorithm, and explores its generalization capabilities, a key aspect of reinforcement learning.  Theory is another sub-category that this paper belongs to. The paper presents a theoretical analysis of TD(λ) learning and its generalization capabilities: the authors derive bounds on the generalization error of TD(λ) learning and provide insights into the conditions under which it can generalize well.
Reinforcement Learning.   Explanation: The paper proposes a reactive critic for reinforcement learning, which is used to improve the control strategy. The paper investigates the relation between the parameters and the resulting approximations of the critic, and demonstrates how the reactive critic responds to changing situations. None of the other sub-categories of AI are mentioned or used in the paper.
Probabilistic Methods, Theory.   Probabilistic Methods: The paper discusses Quadratic Dynamical Systems (QDS), which are probabilistic models used to represent phenomena in various fields. The paper also talks about the convergence of QDS to a stationary distribution, which is a probabilistic concept.  Theory: The paper presents a theoretical result that the sampling problem for QDS is PSPACE-hard, which is a complexity theory result. The paper also discusses the complexity of QDS compared to Markov chains, which is a theoretical analysis.
Neural Networks.   Explanation: The paper analyzes the dynamics of effective neurons that model Hopfield neural networks, and uses a technique to drive these nonlinear oscillators to resonance. The paper does not discuss any other sub-categories of AI such as case-based reasoning, genetic algorithms, probabilistic methods, reinforcement learning, or rule learning.
Neural Networks, Reinforcement Learning, Theory.  Neural Networks: The paper presents a computational model of motor end-plate morphogenesis that is based on recent neurophysiological evidence.  Reinforcement Learning: The paper discusses the role of activity in synaptic competition and how it affects synaptic efficacy and survival.  Theory: The paper presents an extended version of the dual constraint model of motor end-plate morphogenesis and justifies it at the molecular level. It also makes predictions that match the developmental and regenerative behaviour of real synapses.
Theory.   Explanation: The paper discusses a theoretical approach to learning functions over a fixed distribution, and does not involve any specific implementation or application of AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, or reinforcement learning. The paper presents a generalized version of the Kushilevitz-Mansour algorithm using representations of finite groups, and introduces new classes of functions that can be learned using this approach. Therefore, the paper belongs to the sub-category of AI theory.
Probabilistic Methods.   Explanation: The paper discusses Bayesian model averaging (BMA), which is a probabilistic method for accounting for model uncertainty. The paper also provides examples of how BMA improves predictive performance and a catalogue of currently available BMA software.
Rule Learning, Case Based.   Rule Learning is present in the text as the paper discusses the refinement of knowledge-based systems, which are typically rule-based systems. The paper proposes the use of explanation to assist in the refinement of these systems.   Case Based is also present in the text as the paper discusses the use of past cases to assist in the refinement of knowledge-based systems. The paper proposes the use of explanation to help users understand how past cases were used to refine the system and how they can use this information to further refine the system.
Rule Learning.   Explanation: The paper describes the EMERALD system, which integrates five programs that exhibit different types of machine learning and discovery, including "learning rules from examples." The paper also mentions that each program is presented as a "learning robot," which has its own "personality," expressed by its icon, its voice, the comments it generates during the learning process, and the results of learning presented as natural language text and/or voice output. This suggests that the focus of the paper is on rule-based learning methods.
Genetic Algorithms, Probabilistic Methods.   Genetic Algorithms are present in the text as the paper discusses the use of automated design optimization methods for exhaust nozzle design; genetic algorithms are optimization methods that mimic the process of natural selection to search for good solutions.   Probabilistic Methods are also present: in exploring the fundamental research issues that arise when applying automated design optimization to realistic engineering problems, probabilistic techniques are used to model the uncertainty and randomness inherent in the design problem.
Genetic Algorithms, Theory.   Genetic Algorithms: The paper proposes an evolutionary heuristic approach to solve the Minimum Vertex Cover Problem. The approach is based on a genetic algorithm that uses a population of candidate solutions and iteratively applies selection, crossover, and mutation operators to generate new solutions. The authors also discuss the implementation details of the genetic algorithm, such as the encoding of solutions, the fitness function, and the parameters of the algorithm.  Theory: The paper presents a theoretical analysis of the proposed evolutionary heuristic approach. The authors prove that the algorithm is guaranteed to find a solution that is at most twice the size of the optimal solution with a high probability. They also provide experimental results that show the effectiveness of the approach on benchmark instances of the Minimum Vertex Cover Problem.
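A minimal sketch of such an evolutionary heuristic, assuming bitstring encodings, tournament selection, one-point crossover, and bit-flip mutation on a toy graph; none of these details are claimed to match the authors' implementation:

```python
import random

random.seed(1)

# Toy instance (not from the paper): a four-vertex path graph whose
# minimum vertex cover has size 2.
EDGES = [(0, 1), (1, 2), (2, 3)]
N_VERTICES = 4
POP_SIZE, GENERATIONS = 20, 50

def fitness(ind):
    # Minimize cover size; heavily penalize each uncovered edge.
    uncovered = sum(1 for u, v in EDGES if not (ind[u] or ind[v]))
    return sum(ind) + 10 * uncovered

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) <= fitness(b) else b

def crossover(p1, p2):
    cut = random.randrange(1, N_VERTICES)
    return p1[:cut] + p2[cut:]

def mutate(ind, rate=0.1):
    return [1 - bit if random.random() < rate else bit for bit in ind]

pop = [[random.randint(0, 1) for _ in range(N_VERTICES)] for _ in range(POP_SIZE)]
best = min(pop, key=fitness)
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP_SIZE)]
    best = min(pop + [best], key=fitness)   # keep the best individual ever seen

print(best, fitness(best))
```

On an instance this small the GA quickly finds a valid cover near the optimum; the penalty weight trades off feasibility against cover size.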
Theory  Explanation: This paper presents a theoretical challenge to existing theories of perceptual learning, suggesting a more complex picture in which learning takes place at multiple levels. The paper does not discuss any specific AI sub-category such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper proposes a learning architecture based on Variable-Valued Logic, the Star Methodology, and the AQ algorithm. These methods involve probabilistic reasoning and decision-making.  Rule Learning: The proposed method uses a partial-memory approach, which means that in each step of learning, the system remembers the current concept descriptions and specially selected representative examples from the past experience. This approach involves learning rules from examples and using them to make predictions.
Genetic Algorithms.   Explanation: The paper describes the use of genetic programming, which is a subfield of genetic algorithms, for the automatic synthesis of small iterative machine-language programs. The authors use fitness functions and evolutionary operators such as crossover and mutation to evolve programs that can perform multiplication using only addition as the sole arithmetic operator. The paper does not mention any other sub-categories of AI.
This paper belongs to the sub-category of AI called Reinforcement Learning.   Explanation:  The paper describes an application of artificial neural networks and temporal difference learning to teach a computer program to play games. Reinforcement learning is a type of machine learning where an agent learns to make decisions by interacting with an environment and receiving feedback in the form of rewards or punishments. In this paper, the computer program learns to play games by receiving feedback in the form of a score or win/loss outcome. Temporal difference learning is a specific type of reinforcement learning algorithm that updates the value function of the agent based on the difference between predicted and actual rewards. Artificial neural networks are used to represent the value function and make predictions about future rewards. Therefore, this paper primarily belongs to the sub-category of Reinforcement Learning.
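The temporal-difference update described above, moving a value estimate toward the received reward plus the estimate of the successor state, can be sketched with tabular TD(0) on a toy chain. The chain, step size, and episode count are illustrative assumptions, not the paper's game domain:

```python
# Tabular TD(0) on a tiny deterministic chain: s0 -> s1 -> terminal,
# with reward 1 on entering the terminal state and 0 otherwise.
ALPHA, GAMMA = 0.1, 1.0
V = {0: 0.0, 1: 0.0, "terminal": 0.0}
transitions = {0: (1, 0.0), 1: ("terminal", 1.0)}  # state -> (next_state, reward)

for _ in range(500):                     # episodes
    s = 0
    while s != "terminal":
        s_next, r = transitions[s]
        # TD(0) update: move V(s) toward the bootstrapped target r + gamma * V(s').
        V[s] += ALPHA * (r + GAMMA * V[s_next] - V[s])
        s = s_next

print(round(V[0], 3), round(V[1], 3))
```

Both values converge to 1, the true return from each state; in the paper's setting a neural network replaces this table and TD(λ) propagates the error over longer traces.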
Genetic Algorithms, Neural Networks.   Genetic Algorithms are present in the paper as the GP-Music System uses a genetic programming algorithm to evolve short musical sequences. The system also uses an auto rater which is trained using a neural network based on rating data from user interactive runs. Therefore, Neural Networks are also present in the paper.
Probabilistic Methods.   Explanation: The paper describes a method for inducing features suitable for classifying time series data using Bayesian model induction principles. The use of Bayesian methods is a key aspect of probabilistic methods in AI.
Case Based, Rule Learning  Explanation:   - Case Based: The article presents a case-based approach to flexible query answering systems in two different application areas. The internal case memory is implemented as a Case Retrieval Net. - Rule Learning: The article mentions the use of a client-server model combined with a web interface, which implies the use of rules to manage multi-user access.
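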
Genetic Algorithms, Reinforcement Learning  The paper belongs to the sub-categories of Genetic Algorithms and Reinforcement Learning.   Genetic Algorithms are present in the paper as the authors use a genetic algorithm to evolve fitness raters for the GP-Music system. They explain how the algorithm works and how it is used to optimize the fitness function.  Reinforcement Learning is also present in the paper as the authors use a reinforcement learning approach to train the fitness raters. They explain how the system is trained using a reward-based approach and how the fitness raters are evaluated based on their ability to predict the user's preferences.
Reinforcement Learning, Theory.  Reinforcement learning is present in the text as the paper discusses the use of dynamic programming to develop planners and controllers for nonlinear systems. Dynamic programming is a key component of reinforcement learning, which involves learning to make decisions based on rewards and punishments received from the environment.  Theory is also present in the text as the paper discusses the development of procedures to solve complex planning and control problems using second order local trajectory optimization. The paper also discusses the maintenance of global consistency of local models of the value function, which is a theoretical concept in dynamic programming.
Neural Networks, Case Based.   Neural Networks: The paper introduces the task rehearsal method (TRM), which is a knowledge-based inductive learning system that uses either the standard multiple task learning (MTL) or the MTL neural network methods.   Case Based: The paper discusses the need for a measure of task relatedness and introduces TRM, which stores representations of successfully learned tasks within domain knowledge. Virtual examples generated by domain knowledge are rehearsed in parallel with each new task using either the standard multiple task learning (MTL) or the MTL neural network methods. This approach is similar to case-based reasoning, where past cases are used to solve new problems.
Theory.   Explanation: This paper discusses the theoretical limits of parallelism imposed by control flow in computer programs. It does not focus on any specific AI sub-category such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Rule Learning, Inductive Logic Programming - The paper presents a framework for learning clause-selection heuristics to guide program execution in definite-clause logic programs. This combines techniques of explanation-based learning and inductive logic programming. The specific applications of this framework are program optimization and natural language acquisition, both of which involve learning rules to improve efficiency and accuracy.
Neural Networks, Theory.   Neural Networks: The paper discusses on-line generalized linear regression with multidimensional outputs, which involves neural networks with multiple output nodes but no hidden nodes. The approach is based on applying the notion of a matching loss function in two different contexts.  Theory: The paper presents a unified treatment that generalizes earlier results for the gradient descent and exponentiated gradient algorithms to multidimensional outputs, including multiclass logistic regression. The authors also discuss the use of a parameterization function and its role in transforming parameter vectors maintained by the algorithm into the actual weights. The paper also discusses the use of a loss function as a measure of distance between models and as a potential function in analyzing the relative performance of the algorithm compared to an arbitrary fixed model.
Rule Learning, Case Based  Explanation:   Rule Learning: The paper describes a high-level language and run-time environment that allows failure-handling strategies to be incorporated into existing Fortran and C analysis programs. These strategies are constructed from a knowledge base of generic problem management strategies, which can be seen as a set of rules for handling different types of problems that may arise during program execution.  Case Based: The paper discusses the domain of conceptual design of jet engine nozzles, and how the proposed approach is effective in improving analysis program robustness and design optimization performance in this domain. This can be seen as a case-based approach, where knowledge and experience from previous cases (i.e. previous designs of jet engine nozzles) are used to inform the design of new cases.
Theory.   Explanation: The paper is focused on analyzing the sample complexity of weak learning, which is a theoretical concept in machine learning. The authors do not use any specific AI techniques such as neural networks or reinforcement learning, but rather develop mathematical proofs and analyze the properties of different learning algorithms. Therefore, the paper belongs to the sub-category of AI theory.
Probabilistic Methods.   Explanation: The paper discusses the Expectation-Maximization (EM) algorithm, which is a probabilistic method for maximum likelihood parameter estimation. The authors provide a theoretical analysis of the algorithm and describe an acceleration technique for it. The paper does not discuss any other sub-categories of AI.
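A minimal sketch of the EM iteration for a two-component Gaussian mixture, a standard textbook instance rather than the paper's model or its acceleration technique:

```python
import numpy as np

x = np.array([0.0, 0.5, 1.0, 9.0, 9.5, 10.0])  # toy 1-D data, two clear clusters

# Initial guesses (assumptions for the sketch).
mu = np.array([1.0, 8.0])
var = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    w = nk / len(x)

print(np.round(mu, 2))  # component means settle near the cluster centres
```

Each iteration provably does not decrease the likelihood, which is the property the paper's theoretical analysis and acceleration technique build on.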
Reinforcement Learning, Rule Learning.   Reinforcement learning is the main focus of the paper, as it discusses the problem of an agent learning to act in the world through trial and error. The paper proposes an algorithm for this problem that performs an online search through the space of action mappings.   Rule learning is also relevant, as the algorithm proposed in the paper searches through the space of action mappings expressed as Boolean formulae. This can be seen as a form of rule learning, where the algorithm is searching for a set of rules that will lead to optimal behavior.
Case Based, Abductive Reasoning.   Case-based reasoning is explicitly mentioned in the paper as a component of the ADAPtER system, which uses a case memory and adaptation mechanisms to solve new cases based on past experience. Abductive reasoning is also a key component, as the system uses a logical model and abductive reasoning with consistency constraints to solve complex diagnostic problems involving multiple faults. The paper does not mention any other sub-categories of AI.
Neural Networks, Rule Learning.   Neural Networks: The paper discusses the limitations of commonly used neural network models and presents an alternative approach using Boolean networks. The algorithms presented in the paper generate Boolean networks from examples, which is a form of machine learning.   Rule Learning: The paper focuses on designing networks where each node implements a simple Boolean function, which can be seen as a set of rules. The algorithms presented in the paper generate these rules from examples, which is a form of rule learning. The paper also presents examples of applications where these rules are used for image reconstruction and hand-written character recognition.
Reinforcement Learning, Memory-based Learning.   Reinforcement learning is present in the paper as the authors use methods from optimal control to achieve fast real-time learning of the task within 40 to 100 trials.   Memory-based learning is also present in the paper as the authors use a memory-based local modeling approach (locally weighted regression) to represent a learned model of the task to be performed. They also develop an exploration algorithm that explicitly deals with prediction accuracy requirements during exploration.
Genetic Algorithms.   Explanation: The paper describes the use of genetic algorithms in engineering design optimization and proposes new operators and strategies to improve their efficiency and reliability. The entire paper is focused on the use and adaptation of genetic algorithms, making it the most related sub-category of AI.
Probabilistic Methods, Neural Networks.   Probabilistic Methods: The paper discusses the use of Bayesian networks, which are a probabilistic graphical model used to represent uncertain knowledge and make predictions based on probability theory. The authors also mention belief update, which is a key concept in probabilistic reasoning.  Neural Networks: The paper uses nonlinear conditional density estimators, which are a type of neural network used for density estimation. The authors also mention self-organization, which is a concept commonly associated with neural networks.
Theory.   Explanation: The paper discusses theoretical results related to minimax risk in estimation problems, and does not involve any specific AI techniques or applications.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper examines the task of visual search from a connectionist perspective, which suggests the use of neural networks. The paper describes a psychologically plausible system that uses a focus of attention mechanism to locate target objects, which is a common approach in neural network models.  Probabilistic Methods: The paper discusses the computational complexity of the task of visual search and suggests that parallel feed-forward networks cannot perform this task efficiently. This suggests the need for probabilistic methods to improve the efficiency of the search. The paper also describes a strategy that combines top-down and bottom-up information to minimize search time, which is a common approach in probabilistic models.
Rule Learning, Theory.   Explanation:   The paper presents an algorithm for inducing recursive clauses in a class of logic programs, which falls under the sub-category of Rule Learning in AI. The algorithm uses inverse implication as the underlying generalization method and applies to a specific class of logic programs similar to the class of primitive recursive functions. The paper also provides a theoretical analysis of the class of logic programs for which the approach is complete, which falls under the sub-category of Theory in AI.
Genetic Algorithms, Reinforcement Learning, Theory.   Genetic Algorithms: The paper studies the coevolution of behavior in the pursuer-evader game, a classic testbed for coevolutionary genetic algorithms. The authors discuss the methodological hurdles the game raises for coevolutionary simulation, which suggests that a genetic algorithm drives the simulated agents.  Reinforcement Learning: The pursuer-evader game is also a standard setting for reinforcement learning. The authors note the lack of a rigorous metric of agent behavior, which suggests that reinforcement learning is used to train the agents.  Theory: The paper presents a new formulation of the pursuer-evader game that affords a rigorous measure of agent behavior and system dynamics, using information theory to provide quantitative analysis of agent activity. The paper also discusses the communicative component of pursuit and evasion behavior, which suggests a theoretical treatment of the agents' behavior.
Probabilistic Methods, Rule Learning.   Probabilistic Methods: The paper discusses the process of assigning values to parameters in accordance with given design requirements, constraints, and optimization criteria. This involves making probabilistic decisions based on the available information.  Rule Learning: The paper proposes a generic model of parametric design problem solving, which generalizes from existing methods for parametric design. This involves learning rules from previous problem-solving experiences and applying them to new problems.
Reinforcement Learning, Probabilistic Methods  Explanation:  - Reinforcement Learning: The paper describes a self-adjusting algorithm for packet routing in which a reinforcement learning method is embedded into each node of a network. This approach proves superior to routing based on precomputed shortest paths. - Probabilistic Methods: The nodes in the network use local information to keep accurate statistics on which routing policies lead to minimal routing times. This involves probabilistic methods to estimate the expected rewards of different actions.
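A minimal sketch in the spirit of such per-node learning is Q-routing, where each node keeps an estimated delivery time to the destination via each neighbour and updates it from that neighbour's own best estimate. The four-node ring, unit link delays, and learning rate below are illustrative assumptions, not the paper's network:

```python
import random

random.seed(0)

# Toy network: a four-node ring 0-1-2-3-0 with unit link delays.
NEIGHBOURS = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
DEST = 3
ALPHA = 0.5

# Q[x][y]: node x's estimate of delivery time to DEST when forwarding via y.
Q = {x: {y: 0.0 for y in NEIGHBOURS[x]} for x in NEIGHBOURS if x != DEST}

for _ in range(20000):
    x = random.choice(list(Q))
    y = random.choice(NEIGHBOURS[x])
    # Neighbour y reports its best remaining estimate (0 if y is the destination).
    remaining = 0.0 if y == DEST else min(Q[y].values())
    # Q-routing style update: one hop's cost plus the neighbour's best estimate.
    Q[x][y] += ALPHA * (1.0 + remaining - Q[x][y])

print({x: round(min(q.values()), 2) for x, q in Q.items()})
```

The learned minima converge to the true hop counts to the destination (1 from nodes 0 and 2, 2 from node 1), using only information exchanged between neighbouring nodes; queueing delays would enter the update as an extra cost term.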
Neural Networks, Theory.  Explanation:  This paper belongs to the sub-category of Neural Networks as it discusses the design of modular artificial neural networks. The paper also belongs to the sub-category of Theory as it explores the use of biological metaphors in the design of these networks. The author discusses the theoretical basis for using biological metaphors and how they can be applied to the design of artificial neural networks.
Theory  Explanation: The paper discusses a technique for detecting pipeline resource hazards based on finite state automata, which is a theoretical approach to solving the problem of efficient instruction scheduling. The paper does not mention any other sub-categories of AI such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Probabilistic Methods.   Explanation: The paper proposes a method for variable selection and estimation in Cox's proportional hazards model by maximizing the log partial likelihood subject to a constraint on the sum of the absolute values of the parameters. This is a probabilistic approach that seeks the most likely set of variables explaining the survival data. The method is a variation of the "lasso" proposal of Tibshirani (1994), which was originally designed for linear regression. The simulations in the paper compare the accuracy of the lasso method with stepwise selection, a standard variable-selection procedure in Cox regression.
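The constrained-estimation idea is easiest to see in the linear-regression setting of Tibshirani's original lasso, where coordinate descent reduces to soft-thresholding. The toy orthogonal design and penalty below are illustrative assumptions (squared-error loss, not the Cox partial likelihood):

```python
# Coordinate-descent lasso on a toy orthogonal design (linear regression,
# the setting of Tibshirani's original proposal; the Cox version replaces
# the squared loss with the negative log partial likelihood).
n = 4
X = [[1, 1], [-1, 1], [1, -1], [-1, -1]]   # columns scaled so (1/n) * sum x_j^2 = 1
y = [2 * row[0] + 0.1 * row[1] for row in X]
lam = 0.5

def soft(z, t):
    # Soft-thresholding operator: shrink z toward zero by t.
    return (abs(z) - t) * (1 if z > 0 else -1) if abs(z) > t else 0.0

beta = [0.0, 0.0]
for _ in range(10):                         # a few sweeps; orthogonal case converges fast
    for j in range(2):
        # Correlation of feature j with the partial residual.
        rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k] for k in range(2) if k != j))
                  for i in range(n)) / n
        beta[j] = soft(rho, lam)

print(beta)   # the weak feature's coefficient is driven exactly to zero
```

The L1 constraint sets weak coefficients exactly to zero rather than merely shrinking them, which is what makes the method perform variable selection and estimation simultaneously.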
Neural Networks.   Explanation: The paper discusses the limitations of current artificial neural network systems and proposes a reflective neural network architecture to address these limitations. The paper also describes the use of submodules and a Pandemonium system to handle mapping tasks and decompose complex problems. The paper presents results from testing the architecture on two problem domains, including a handwritten digit problem and the parity problem. Overall, the paper focuses on the use and improvement of neural network systems.
This paper belongs to the sub-category of AI known as Neural Networks.   Explanation: The paper discusses the use of a neural net classifier and its ability to reject incorrect answers. The entire paper is focused on the use and implementation of neural networks in this context. No other sub-categories of AI are mentioned or discussed in the paper.
Reinforcement Learning.   Explanation: The paper focuses on implementing a reinforcement learning architecture for a reactive control system for a simulated race car. The authors explore the use of reinforcement learning networks to address the tuning problem and hypothesize that interacting reactions can be decomposed into separate control tasks resident in separate networks and coordinated through the tuning mechanism and a higher level controller. The paper does not discuss or mention any other sub-categories of AI.
Genetic Algorithms, Case Based  Explanation:  - Genetic Algorithms: The paper proposes a genetic algorithm based prototype learning system, PLEASE, for supervised classification problems. The genetic algorithm is used to evolve the number of prototypes per class and their positions on the input space. - Case Based: The system uses a set of prototypes for each of the possible classes, and the class of an input instance is determined by the prototype nearest to this instance. This is a form of case-based reasoning, where new cases are classified based on their similarity to previously learned cases (prototypes).
Neural Networks, Rule Learning.   Neural Networks: The paper compares two methods for refining uncertain knowledge bases using propositional certainty-factor rules, and both methods employ neural-network training to refine the certainties of existing rules and filter potential new rules.   Rule Learning: The paper specifically focuses on refining certainty-factor rule-bases using two different methods, one of which adds new rules symbolically and the other adds a complete set of potential new rules with very low certainty and allows neural-network training to filter and adjust these rules. The experimental results compare the effectiveness of these two rule learning methods.
Rule Learning, Theory.   Explanation:
- Rule Learning: The paper discusses an approach of constructing new attributes based on production rules, which is a subfield of rule learning.
- Theory: The paper presents a theoretical approach of improving decision trees by constructing conjunctive tests, which is a subfield of machine learning theory.
Neural Networks, Theory.   Neural Networks: The paper discusses the HyperBF model, which is a neural network model for understanding perceptual learning in visual hyperacuity tasks. The authors also propose a biologically plausible extension of the model that takes into account the functional architecture of early vision.  Theory: The paper presents a theoretical framework for understanding perceptual learning in vernier hyperacuity, specifically within the context of the HyperBF model. The authors explore various learning modes that can coexist within the framework and propose two unsupervised learning rules that may be involved in hyperacuity learning. They also report results of psychophysical experiments that support their hypothesis about activity-dependent presynaptic amplification in perceptual learning.
Probabilistic Methods.   Explanation: The paper discusses the performance of different types of Bayesian classifiers in a medical diagnosis domain, which is a probabilistic method of classification. The paper does not mention any other sub-categories of AI.
Neural Networks, Probabilistic Methods.   Neural Networks: The paper discusses the role of neural networks in early vision and how they contribute to egocentric spatial representation. It also mentions the use of neural networks in modeling visual attention and object recognition.  Probabilistic Methods: The paper discusses the use of probabilistic models in understanding visual perception and spatial representation. It also mentions the use of Bayesian inference in modeling visual attention and object recognition.
Probabilistic Methods.   Explanation: The paper describes statistical research and development work on hospital quality monitor data sets, which involves statistical analysis, exploration, and modeling of data from several quality monitors. The primary goal is to understand patterns of variability over time in hospital-level and monitor area-specific quality monitor measures, and patterns of dependencies between sets of monitors. The paper discusses the development of several classes of formal models, including hierarchical random effects time series models, which are probabilistic methods used to model single or multiple monitor time series. The paper presents results of analyses of the three monitor data sets, in both single and multiple monitor frameworks, and presents a variety of summary inferences in graphical displays. Therefore, the paper belongs to the sub-category of Probabilistic Methods in AI.
Neural Networks, Probabilistic Methods, Theory.   Neural Networks: The paper presents an application of the EM algorithm in neural net training.  Probabilistic Methods: The paper derives an EM algorithm to compute the lasso solution.  Theory: The paper shows the equivalence between adaptive ridge and lasso and derives an EM algorithm to compute the lasso solution.
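As a reminder of what the lasso penalty does, the lasso solution has a closed form for an orthonormal design: soft-thresholding of the least-squares coefficients. This is an illustrative sketch of that special case, not the paper's adaptive-ridge/EM algorithm:

```python
import numpy as np

def soft_threshold(z, lam):
    """Lasso solution for an orthonormal design: shrink each
    least-squares coefficient toward zero by lam, zeroing small ones."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

print(soft_threshold(np.array([3.0, -0.5, 1.2]), 1.0))  # [2. -0.  0.2]
```

The zeroing of small coefficients is what gives the lasso its variable-selection behavior, in contrast to ridge regression's uniform shrinkage.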
Probabilistic Methods.   Explanation: The paper discusses the use of Gaussian processes, which are a probabilistic method, to model regression problems with input-dependent noise. The authors also use Markov chain Monte Carlo methods to sample from the posterior distribution of the noise rate.
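A minimal sketch of Gaussian process regression with a fixed RBF kernel and constant noise level. The paper's input-dependent noise model and its MCMC sampling of the posterior are not reproduced here; this shows only the basic GP posterior-mean computation:

```python
import numpy as np

def gp_predict(X, y, Xs, ell=1.0, sigma_n=0.1):
    """GP regression posterior mean with an RBF kernel of lengthscale
    ell and fixed observation-noise scale sigma_n (1-D inputs)."""
    def k(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return np.exp(-0.5 * d2 / ell**2)
    K = k(X, X) + sigma_n**2 * np.eye(len(X))   # noisy train covariance
    return k(Xs, X) @ np.linalg.solve(K, y)     # posterior mean at Xs

X = np.linspace(0, 5, 30)
y = np.sin(X)
mu = gp_predict(X, y, np.array([2.0]))
print(round(float(mu[0]), 2))  # near sin(2) ≈ 0.91
```

Input-dependent noise, as in the paper, would replace the constant `sigma_n**2 * np.eye(len(X))` term with a diagonal of per-point noise variances, themselves modeled by a second process.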
Probabilistic Methods.   Explanation: The paper discusses the use of importance sampling, which is a probabilistic method for estimating properties of a target distribution by drawing samples from a different, easier-to-sample distribution. The paper also mentions the use of Markov chain transitions and annealing sequences, which are common techniques in probabilistic modeling and inference.
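The basic importance-sampling estimator mentioned above can be sketched with a toy example: a standard-normal target and a wider normal proposal. The paper's annealed, Markov-chain variant is more elaborate; this shows only the core weighting idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate E[x^2] under a standard normal target (true value 1.0)
# by drawing from an easier-to-sample proposal N(0, 2^2).
n = 200_000
x = rng.normal(0.0, 2.0, size=n)

# Importance weights correct for sampling from the wrong distribution.
log_target = -0.5 * x**2
log_proposal = -0.5 * (x / 2.0) ** 2 - np.log(2.0)
w = np.exp(log_target - log_proposal)

estimate = np.sum(w * x**2) / np.sum(w)  # self-normalized estimator
print(round(estimate, 2))  # close to 1.0
```

Annealing, as the paper describes, interposes a sequence of intermediate distributions between proposal and target so the weights stay well behaved when the two differ greatly.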
Neural Networks.   Explanation: The paper is specifically about radial basis function (RBF) networks, which are a type of artificial neural network. The paper discusses how RBF networks can be used for supervised learning tasks such as regression, classification, and time series prediction. Therefore, the paper is most closely related to the sub-category of AI known as Neural Networks.
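A minimal RBF-network regression sketch, with fixed Gaussian centers and output weights fit by regularized least squares. This is one common training scheme, used here for illustration; the paper covers RBF networks more broadly:

```python
import numpy as np

def rbf_features(X, centers, width):
    """Gaussian basis activations for each input/center pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])

# Ten evenly spaced centers; output weights by regularized least squares.
centers = np.linspace(-3, 3, 10)[:, None]
Phi = rbf_features(X, centers, width=0.8)
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(10), Phi.T @ y)

pred = rbf_features(np.array([[1.0]]), centers, 0.8) @ w
print(round(float(pred[0]), 2))  # approx sin(1.0) ≈ 0.84
```

Because only the linear output layer is trained, fitting reduces to a convex least-squares problem, which is one reason RBF networks are attractive for regression and time-series prediction.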
Neural Networks.   Explanation: The paper discusses methods of inverting connectionist networks, which are a type of neural network. The paper specifically mentions recurrent, time-delayed, and discrete versions of these networks, which are all subcategories of neural networks. The simulation results also involve tasks commonly associated with neural networks, such as XOR and handwritten digit recognition.
Probabilistic Methods, Reinforcement Learning.   Probabilistic Methods: The paper discusses a continuous change in the target distribution, which can be modeled probabilistically. The authors propose a weighting scheme to estimate the error of a hypothesis, which is a probabilistic approach to learning.  Reinforcement Learning: The paper discusses how to minimize the error of a prediction in an environment that is changing over time. This is a key problem in reinforcement learning, where an agent must learn to make decisions in an environment that may change unpredictably. The authors propose a method for estimating the error of a hypothesis and using this estimate to improve the agent's predictions, which is a reinforcement learning approach.
Probabilistic Methods.   The paper discusses the use of regularization techniques in linear regression, which is a probabilistic method. The authors propose a new algorithm called "adaptive multiple penalization" which penalizes each parameter individually and automatically adjusts the penalties based on a global regularization hyperparameter. The hyperparameter is estimated using resampling techniques, which is a common approach in probabilistic methods. The paper also compares the performance of their algorithm with other regularization techniques, such as ridge regression and variable selection, which are also probabilistic methods.
Genetic Algorithms, Reinforcement Learning.   Genetic Algorithms are present in the paper as the authors use a Distributed Classifier System (DCS) which is a type of genetic algorithm to evolve a set of rules for controlling the robot. The DCS is based on the principles of genetic algorithms, where a population of rules is evolved through selection, crossover, and mutation.   Reinforcement Learning is also present in the paper as the authors use a reward-based approach to train the robot. The robot receives a reward for completing a task correctly and a penalty for completing it incorrectly. The authors use a Q-learning algorithm to update the robot's policy based on the rewards and penalties it receives.
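The tabular Q-learning update mentioned above can be sketched as follows. This is a generic illustration of the update rule, not the paper's classifier-system controller:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: move Q(s,a) toward the
    bootstrapped target r + gamma * max_a' Q(s', a')."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

Q = np.zeros((2, 2))
# Hypothetical experience: reward +1 for action 0 in state 0,
# penalty -1 for action 1, both transitioning to state 1.
Q = q_update(Q, s=0, a=0, r=1.0, s_next=1)
Q = q_update(Q, s=0, a=1, r=-1.0, s_next=1)
print(Q[0])  # [ 0.1 -0.1]
```

Repeated updates of this form propagate rewards and penalties back through the state space, shaping the policy the robot follows.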
Genetic Algorithms, Theory.   Explanation:
- Genetic Algorithms: The paper focuses on the Crossover operator, which is a common component of Genetic Programming (GP), a subfield of Genetic Algorithms. The MAX problem is used as an example to demonstrate the interaction between Crossover and a restriction on tree depth.
- Theory: The paper presents a theoretical analysis of the interaction between Crossover and a restriction on tree depth, using the MAX problem as a case study. The authors derive formulas and proofs to explain the behavior of the algorithm under different conditions.
Reinforcement Learning, Theory.   Reinforcement learning is the main focus of the paper, as the authors propose a model of efficient on-line reinforcement learning based on the expected mistake bound framework. The paper also falls under the category of Theory, as it presents a theoretical framework for analyzing the performance of the proposed model and shows its polynomial equivalence to the PAC model of off-line reinforcement learning.
Neural Networks, Reinforcement Learning  Explanation:  This paper belongs to the sub-category of Neural Networks because it focuses on the trainability of single neurons, which are the building blocks of neural networks. The paper discusses the use of reinforcement learning to train these neurons, which falls under the sub-category of Reinforcement Learning. The authors propose a method for training single neurons that is robust to noise and can handle non-linear input-output mappings. They demonstrate the effectiveness of their approach through simulations and experiments on real-world data. Overall, the paper provides insights into the trainability of single neurons and how they can be used to improve the performance of neural networks.
Probabilistic Methods.   The paper describes the use of statistical measures of similarity (SW, FASTA, BLAST) to create a graph of protein sequences, where the weight of an edge represents the degree of similarity. The authors then use a novel two-phase algorithm to merge clusters of related proteins based on transitivity and strong connectivity, and refine the classification based on a global test. The resulting hierarchical organization of all proteins is obtained at varying thresholds of statistical significance. Therefore, probabilistic methods are used throughout the paper to analyze and classify the space of all protein sequences.
Theory.   Explanation: The paper presents a system that learns and updates a diagnostic knowledge base using a combination of deductive reasoning from phenomenological theory, abductive reasoning from a causal model, and inductive reasoning from examples. The system handles the problems of imperfection and intractability of the theory by allowing the system to make assumptions during its reasoning. The system works in a first order logic environment and has been applied in a real domain. There is no mention of any of the other sub-categories of AI listed in the question.
Probabilistic Methods.   Explanation: The paper investigates the behaviour of the random walk Metropolis algorithm, which is a probabilistic method for sampling from a target distribution. The paper also discusses the optimal scaling of the proposal distribution, which is a probabilistic method for generating candidate samples in the Metropolis algorithm. The results are proved in the framework of a weak convergence result, which is a probabilistic method for studying the behaviour of stochastic processes.
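A minimal random walk Metropolis sampler, showing the accept/reject rule and the proposal scaling whose optimal choice the paper studies. This is a toy 1-D example, not the paper's asymptotic analysis:

```python
import numpy as np

rng = np.random.default_rng(2)

def rwm(log_target, x0, step, n):
    """Random walk Metropolis with Gaussian proposals of scale `step`."""
    x, samples = x0, []
    for _ in range(n):
        prop = x + step * rng.normal()
        # Accept with probability min(1, target(prop) / target(x)).
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
    return np.array(samples)

# Sample a standard normal; a step scale around 2.4 is roughly
# optimal for a 1-D Gaussian target.
draws = rwm(lambda z: -0.5 * z**2, x0=0.0, step=2.4, n=50_000)
print(round(draws.mean(), 1), round(draws.std(), 1))  # ≈ 0.0 1.0
```

Scaling results of the kind the paper proves say how `step` should shrink with dimension (and what acceptance rate that implies) for the chain to mix efficiently.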
Probabilistic Methods.   Explanation: The paper develops probabilistic bounds on out-of-sample error rates for several classifiers using a single set of in-sample data. The bounds are based on probabilities over partitions of the union of in-sample and out-of-sample data into in-sample and out-of-sample data sets. The bounds apply when in-sample and out-of-sample data are drawn from the same distribution. Therefore, the paper belongs to the sub-category of Probabilistic Methods in AI.
Rule Learning, Theory.   The paper presents an algorithm for regular grammar inference, which falls under the category of Rule Learning. The algorithm is grounded in a theoretical framework that allows for incremental updates to the grammar, which places the paper under Theory as well.
Theory.   Explanation: The paper presents a theoretical framework for learning DFA from simple examples and answers an open research question about the PAC-identifiability of DFA under certain conditions. The approach uses the RPNI algorithm, which is a theoretical algorithm for learning DFA from labeled examples. There is no mention of any other sub-categories of AI such as case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning.
Probabilistic Methods, Rule Learning  Probabilistic Methods: This paper belongs to the sub-category of Probabilistic Methods as it discusses belief maintenance using probabilistic logic. The authors propose a probabilistic logic-based approach to belief maintenance that can handle uncertain and incomplete information.  Rule Learning: This paper also belongs to the sub-category of Rule Learning as the authors propose a rule-based approach to belief maintenance. They use a set of rules to update beliefs based on new evidence and to resolve conflicts between beliefs. The rules are learned from examples and can be refined over time.
Probabilistic Methods.   Explanation: The paper discusses the use of Ignorant Belief Networks (IBNs) for forecasting glucose concentration in diabetic patients. IBNs are a type of probabilistic graphical model that represent uncertain relationships between variables using probability distributions. The paper describes how IBNs can be used to model the complex interactions between various factors that affect glucose concentration, such as insulin dosage, food intake, and physical activity. The authors also use Bayesian inference to update the probability distributions based on new data, which is a common technique in probabilistic methods. Therefore, this paper belongs to the sub-category of Probabilistic Methods in AI.
Probabilistic Methods.   Explanation: The paper discusses the convergence rate of independent Metropolis chains, which is a probabilistic method used in Markov chain Monte Carlo simulations. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, or Theory.
Probabilistic Methods.   Explanation: The paper's title explicitly mentions "Probabilistic Reasoning," and the abstract mentions "probabilistic models" and "Bayesian networks." The paper discusses how to reason under uncertainty and ignorance using probabilistic methods, such as Bayesian networks and Markov decision processes. The other sub-categories of AI listed (Case Based, Genetic Algorithms, Neural Networks, Reinforcement Learning, Rule Learning, Theory) are not mentioned or discussed in the paper.
Neural Networks.   Explanation: The paper describes a new connectionist architecture called Simple Synchrony Network (SSN), which incorporates Temporal Synchrony Variable Binding (TSVB) into Simple Recurrent Networks. The SSN is a type of neural network, and the paper focuses on training SSNs to parse natural language sentences. There is no mention of any other sub-categories of AI such as Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
This paper belongs to the sub-categories of Genetic Algorithms and Neural Networks.   Genetic Algorithms: The paper presents a new method based on Cellular Encoding and Genetic Programming to find good building-blocks for architectures of Artificial Neural Networks. The method involves the use of genetic operators such as mutation and crossover to evolve the network architectures.  Neural Networks: The paper deals with the combination of Evolutionary Algorithms and Artificial Neural Networks. The proposed method aims to find good building-blocks for architectures of Artificial Neural Networks by using Cellular Encoding and Genetic Programming. The paper also presents simulation results for two real-world problems using the proposed method.
Genetic Algorithms.   Explanation: The paper discusses the use of genetic programming and genetic algorithms to study the relationship between different tasks and performance. It also mentions the challenges of analyzing the mechanism of incremental evolution using genetic programming and the plan to investigate it using genetic algorithms with fixed-length genotypes. The paper does not mention any other sub-categories of AI.
Genetic Algorithms.   Explanation: The paper describes a genome compiler for Genetic Programming, which is a subfield of Artificial Intelligence that uses evolutionary algorithms to evolve solutions to problems. The paper specifically focuses on improving the efficiency of individual evaluations in Genetic Programming, which is a common challenge in this field. The use of machine code compilation is a novel approach to addressing this challenge and is specific to Genetic Programming. Therefore, this paper belongs to the sub-category of Genetic Algorithms within AI.
Genetic Algorithms.   Explanation: The paper focuses on the interaction between the Crossover operator and a restriction on tree depth in Genetic Programming (GP), specifically in the context of the MAX problem. The paper discusses the limitations and inadequacies of the Crossover operator in GP, and how it affects the diversity and fitness of the tree population. Therefore, the paper belongs to the sub-category of AI known as Genetic Algorithms.
Theory.   Explanation: The paper discusses a representation called the Functional Representation (FR) and proposes that it can provide the basis for capturing the causal aspects of the design rationale. The paper does not discuss any specific AI techniques such as case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, or rule learning. Therefore, the paper is most related to the sub-category of AI called Theory, which deals with the development of formal models and representations for reasoning about intelligent systems.
Neural Networks.   Explanation: The paper presents a face detection system based on a retinally connected neural network. The system uses multiple networks and a bootstrap algorithm for training, which involves adding false detections into the training set. The paper does not mention any other sub-categories of AI such as Case Based, Genetic Algorithms, Probabilistic Methods, Reinforcement Learning, Rule Learning, or Theory.
